org.apache.flink.streaming.api.functions.source.SourceFunction Java Examples
The following examples show how to use
org.apache.flink.streaming.api.functions.source.SourceFunction.
The original project and source file are noted above each example.
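Before the examples, a minimal sketch of the contract these snippets implement may help: run(SourceContext) emits records until cancel() is called from another thread, and emission that must be atomic with checkpoints should hold the checkpoint lock. The class below is illustrative only and does not come from any of the projects listed here.

// Illustrative only: a minimal SourceFunction that counts upwards until cancelled.
public class CountingSource implements SourceFunction<Long> {

    private static final long serialVersionUID = 1L;

    // cancel() is called from a different thread, hence volatile
    private volatile boolean isRunning = true;

    @Override
    public void run(SourceContext<Long> ctx) throws Exception {
        long counter = 0L;
        while (isRunning) {
            // hold the checkpoint lock so emission is atomic with checkpoints
            synchronized (ctx.getCheckpointLock()) {
                ctx.collect(counter++);
            }
            Thread.sleep(10L);
        }
    }

    @Override
    public void cancel() {
        isRunning = false;
    }
}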
Example #1
Source File: KinesisDataFetcher.java From flink with Apache License 2.0

/**
 * Creates a Kinesis Data Fetcher.
 *
 * @param streams the streams to subscribe to
 * @param sourceContext context of the source function
 * @param runtimeContext this subtask's runtime context
 * @param configProps the consumer configuration properties
 * @param deserializationSchema deserialization schema
 */
public KinesisDataFetcher(List<String> streams,
                          SourceFunction.SourceContext<T> sourceContext,
                          RuntimeContext runtimeContext,
                          Properties configProps,
                          KinesisDeserializationSchema<T> deserializationSchema,
                          KinesisShardAssigner shardAssigner,
                          AssignerWithPeriodicWatermarks<T> periodicWatermarkAssigner,
                          WatermarkTracker watermarkTracker) {
    this(streams,
        sourceContext,
        sourceContext.getCheckpointLock(),
        runtimeContext,
        configProps,
        deserializationSchema,
        shardAssigner,
        periodicWatermarkAssigner,
        watermarkTracker,
        new AtomicReference<>(),
        new ArrayList<>(),
        createInitialSubscribedStreamsToLastDiscoveredShardsState(streams),
        KinesisProxy::create);
}
Example #2
Source File: IngressToSourceFunctionTranslator.java From flink-statefun with Apache License 2.0

private DecoratedSource sourceFromSpec(IngressIdentifier<?> key, IngressSpec<?> spec) {
    SourceProvider provider = universe.sources().get(spec.type());
    if (provider == null) {
        throw new IllegalStateException(
            "Unable to find a source translation for ingress of type "
                + spec.type()
                + ", which is bound for key "
                + key);
    }
    SourceFunction<?> source = provider.forSpec(spec);
    if (source == null) {
        throw new NullPointerException(
            "A source provider for type " + spec.type() + ", has produced a NULL source.");
    }
    return DecoratedSource.of(spec, source);
}
Example #3
Source File: CheckpointExceptionHandlerConfigurationTest.java From Flink-CEPplus with Apache License 2.0

public void doTestPropagationFromCheckpointConfig(boolean failTaskOnCheckpointErrors) throws Exception {
    StreamExecutionEnvironment streamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment();
    streamExecutionEnvironment.setParallelism(1);
    streamExecutionEnvironment.getCheckpointConfig().setCheckpointInterval(1000);
    streamExecutionEnvironment.getCheckpointConfig().setFailOnCheckpointingErrors(failTaskOnCheckpointErrors);

    streamExecutionEnvironment.addSource(new SourceFunction<Integer>() {

        @Override
        public void run(SourceContext<Integer> ctx) throws Exception {
        }

        @Override
        public void cancel() {
        }
    }).addSink(new DiscardingSink<>());

    StreamGraph streamGraph = streamExecutionEnvironment.getStreamGraph();
    JobGraph jobGraph = StreamingJobGraphGenerator.createJobGraph(streamGraph);

    SerializedValue<ExecutionConfig> serializedExecutionConfig = jobGraph.getSerializedExecutionConfig();
    ExecutionConfig executionConfig =
        serializedExecutionConfig.deserializeValue(Thread.currentThread().getContextClassLoader());

    Assert.assertEquals(failTaskOnCheckpointErrors, executionConfig.isFailTaskOnCheckpointError());
}
Example #4
Source File: DynamoDBStreamsDataFetcher.java From flink with Apache License 2.0

/**
 * Constructor.
 *
 * @param streams list of streams to fetch data
 * @param sourceContext source context
 * @param runtimeContext runtime context
 * @param configProps config properties
 * @param deserializationSchema deserialization schema
 * @param shardAssigner shard assigner
 */
public DynamoDBStreamsDataFetcher(List<String> streams,
                                  SourceFunction.SourceContext<T> sourceContext,
                                  RuntimeContext runtimeContext,
                                  Properties configProps,
                                  KinesisDeserializationSchema<T> deserializationSchema,
                                  KinesisShardAssigner shardAssigner) {
    super(streams,
        sourceContext,
        sourceContext.getCheckpointLock(),
        runtimeContext,
        configProps,
        deserializationSchema,
        shardAssigner,
        null,
        null,
        new AtomicReference<>(),
        new ArrayList<>(),
        createInitialSubscribedStreamsToLastDiscoveredShardsState(streams),
        // use DynamoDBStreamsProxy
        DynamoDBStreamsProxy::create);
}
Example #5
Source File: PulsarRowFetcher.java From pulsar-flink with Apache License 2.0

public PulsarRowFetcher(
        SourceFunction.SourceContext<Row> sourceContext,
        Map<String, MessageId> seedTopicsWithInitialOffsets,
        SerializedValue<AssignerWithPeriodicWatermarks<Row>> watermarksPeriodic,
        SerializedValue<AssignerWithPunctuatedWatermarks<Row>> watermarksPunctuated,
        ProcessingTimeService processingTimeProvider,
        long autoWatermarkInterval,
        ClassLoader userCodeClassLoader,
        StreamingRuntimeContext runtimeContext,
        ClientConfigurationData clientConf,
        Map<String, Object> readerConf,
        int pollTimeoutMs,
        DeserializationSchema<Row> deserializer,
        PulsarMetadataReader metadataReader) throws Exception {

    super(sourceContext, seedTopicsWithInitialOffsets, watermarksPeriodic, watermarksPunctuated,
        processingTimeProvider, autoWatermarkInterval, userCodeClassLoader, runtimeContext,
        clientConf, readerConf, pollTimeoutMs, deserializer, metadataReader);
}
Example #6
Source File: FlinkPulsarSourceTest.java From pulsar-flink with Apache License 2.0

public TestingFetcher(
        SourceFunction.SourceContext<T> sourceContext,
        Map<String, MessageId> seedTopicsWithInitialOffsets,
        SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
        SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
        ProcessingTimeService processingTimeProvider,
        long autoWatermarkInterval) throws Exception {
    super(
            sourceContext,
            seedTopicsWithInitialOffsets,
            watermarksPeriodic,
            watermarksPunctuated,
            processingTimeProvider,
            autoWatermarkInterval,
            TestingFetcher.class.getClassLoader(),
            null,
            null,
            null,
            0,
            null,
            null);
}
Example #7
Source File: StreamExecutionEnvironment.java From flink with Apache License 2.0

/**
 * Creates a data stream from the given non-empty collection.
 *
 * <p>Note that this operation will result in a non-parallel data stream source,
 * i.e., a data stream source with parallelism one.
 *
 * @param data
 *        The collection of elements to create the data stream from
 * @param typeInfo
 *        The TypeInformation for the produced data stream
 * @param <OUT>
 *        The type of the returned data stream
 * @return The data stream representing the given collection
 */
public <OUT> DataStreamSource<OUT> fromCollection(Collection<OUT> data, TypeInformation<OUT> typeInfo) {
    Preconditions.checkNotNull(data, "Collection must not be null");

    // must not have null elements and mixed elements
    FromElementsFunction.checkCollection(data, typeInfo.getTypeClass());

    SourceFunction<OUT> function;
    try {
        function = new FromElementsFunction<>(typeInfo.createSerializer(getConfig()), data);
    } catch (IOException e) {
        throw new RuntimeException(e.getMessage(), e);
    }
    return addSource(function, "Collection Source", typeInfo).setParallelism(1);
}
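For context, a hedged usage sketch of the method above (the list contents and variable names are illustrative):

// Illustrative usage of fromCollection(Collection, TypeInformation)
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<Integer> numbers =
    env.fromCollection(Arrays.asList(1, 2, 3), BasicTypeInfo.INT_TYPE_INFO);
numbers.print();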
Example #8
Source File: Main.java From flink-learning with Apache License 2.0

public static void main(String[] args) throws Exception {
    // create the streaming execution environment
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.getConfig().setGlobalJobParameters(ParameterTool.fromArgs(args));
    env.setParallelism(1);

    env.addSource(new SourceFunction<Long>() {
        @Override
        public void run(SourceContext<Long> sourceContext) throws Exception {
            while (true) {
                sourceContext.collect(System.currentTimeMillis());
            }
        }

        @Override
        public void cancel() {
        }
    })
            .map((MapFunction<Long, Long>) aLong -> aLong / 1000).setParallelism(3)
            .print();

    env.execute("zhisheng RestartStrategy example");
}
Example #9
Source File: DataStreamAllroundTestJobFactory.java From flink with Apache License 2.0

static SourceFunction<Event> createEventSource(ParameterTool pt) {
    return new SequenceGeneratorSource(
        pt.getInt(
            SEQUENCE_GENERATOR_SRC_KEYSPACE.key(),
            SEQUENCE_GENERATOR_SRC_KEYSPACE.defaultValue()),
        pt.getInt(
            SEQUENCE_GENERATOR_SRC_PAYLOAD_SIZE.key(),
            SEQUENCE_GENERATOR_SRC_PAYLOAD_SIZE.defaultValue()),
        pt.getLong(
            SEQUENCE_GENERATOR_SRC_EVENT_TIME_MAX_OUT_OF_ORDERNESS.key(),
            SEQUENCE_GENERATOR_SRC_EVENT_TIME_MAX_OUT_OF_ORDERNESS.defaultValue()),
        pt.getLong(
            SEQUENCE_GENERATOR_SRC_EVENT_TIME_CLOCK_PROGRESS.key(),
            SEQUENCE_GENERATOR_SRC_EVENT_TIME_CLOCK_PROGRESS.defaultValue()),
        pt.getLong(
            SEQUENCE_GENERATOR_SRC_SLEEP_TIME.key(),
            SEQUENCE_GENERATOR_SRC_SLEEP_TIME.defaultValue()),
        pt.getLong(
            SEQUENCE_GENERATOR_SRC_SLEEP_AFTER_ELEMENTS.key(),
            SEQUENCE_GENERATOR_SRC_SLEEP_AFTER_ELEMENTS.defaultValue()));
}
Example #10
Source File: FailureRateRestartStrategyMain.java From flink-learning with Apache License 2.0

public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.getConfig().setGlobalJobParameters(ParameterTool.fromArgs(args));

    // restart with a 10s delay; if the job fails three times within two minutes, it is stopped for good
    env.setRestartStrategy(RestartStrategies.failureRateRestart(3, Time.minutes(2), Time.seconds(10)));

    env.addSource(new SourceFunction<Long>() {
        @Override
        public void run(SourceContext<Long> sourceContext) throws Exception {
            while (true) {
                // collecting null makes the map below throw a NullPointerException,
                // which exercises the restart strategy
                sourceContext.collect(null);
            }
        }

        @Override
        public void cancel() {
        }
    })
            .map((MapFunction<Long, Long>) aLong -> aLong / 1)
            .print();

    env.execute("zhisheng failureRate Restart Strategy example");
}
Example #11
Source File: FixedDelayRestartStrategyMain.java From flink-learning with Apache License 2.0

public static void main(String[] args) throws Exception {
    // create the streaming execution environment
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.getConfig().setGlobalJobParameters(ParameterTool.fromArgs(args));

    // restart every 5s; after three failed attempts the job is stopped for good
    env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 5000));

    env.addSource(new SourceFunction<Long>() {
        @Override
        public void run(SourceContext<Long> sourceContext) throws Exception {
            while (true) {
                // collecting null makes the map below throw a NullPointerException,
                // which exercises the restart strategy
                sourceContext.collect(null);
            }
        }

        @Override
        public void cancel() {
        }
    })
            .map((MapFunction<Long, Long>) aLong -> aLong / 1)
            .print();

    env.execute("zhisheng fixedDelay Restart Strategy example");
}
Example #12
Source File: AEMain.java From flink-learning with Apache License 2.0

public static void main(String[] args) throws Exception {
    // create the streaming execution environment
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.getConfig().setGlobalJobParameters(ParameterTool.fromArgs(args));

    // restart every 5s; after three failed attempts the job is stopped for good
    env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 5000));

    env.addSource(new SourceFunction<Long>() {
        @Override
        public void run(SourceContext<Long> sourceContext) throws Exception {
            while (true) {
                sourceContext.collect(System.currentTimeMillis());
            }
        }

        @Override
        public void cancel() {
        }
    })
            // dividing by zero throws an ArithmeticException, which triggers the restarts
            .map((MapFunction<Long, Long>) aLong -> aLong / 0)
            .print();

    env.execute("zhisheng RestartStrategy example");
}
Example #13
Source File: NiFiSourceTopologyExample.java From flink with Apache License 2.0

public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
            .url("http://localhost:8080/nifi")
            .portName("Data for Flink")
            .requestBatchCount(5)
            .buildConfig();

    SourceFunction<NiFiDataPacket> nifiSource = new NiFiSource(clientConfig);
    DataStream<NiFiDataPacket> streamSource = env.addSource(nifiSource).setParallelism(2);

    DataStream<String> dataStream = streamSource.map(new MapFunction<NiFiDataPacket, String>() {
        @Override
        public String map(NiFiDataPacket value) throws Exception {
            return new String(value.getContent(), Charset.defaultCharset());
        }
    });

    dataStream.print();
    env.execute();
}
Example #14
Source File: RoutableProtobufKafkaSourceProviderTest.java From flink-statefun with Apache License 2.0

@Test
public void exampleUsage() {
    JsonNode ingressDefinition =
        loadAsJsonFromClassResource(
            getClass().getClassLoader(), "routable-protobuf-kafka-ingress.yaml");
    JsonIngressSpec<?> spec =
        new JsonIngressSpec<>(
            ProtobufKafkaIngressTypes.ROUTABLE_PROTOBUF_KAFKA_INGRESS_TYPE,
            new IngressIdentifier<>(Message.class, "foo", "bar"),
            ingressDefinition);

    RoutableProtobufKafkaSourceProvider provider = new RoutableProtobufKafkaSourceProvider();
    SourceFunction<?> source = provider.forSpec(spec);

    assertThat(source, instanceOf(FlinkKafkaConsumer.class));
}
Example #15
Source File: CheckpointExceptionHandlerConfigurationTest.java From flink with Apache License 2.0

public void doTestPropagationFromCheckpointConfig(boolean failTaskOnCheckpointErrors) {
    StreamExecutionEnvironment streamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment();
    streamExecutionEnvironment.setParallelism(1);
    streamExecutionEnvironment.getCheckpointConfig().setCheckpointInterval(1000);
    streamExecutionEnvironment.getCheckpointConfig().setFailOnCheckpointingErrors(failTaskOnCheckpointErrors);

    streamExecutionEnvironment.addSource(new SourceFunction<Integer>() {

        @Override
        public void run(SourceContext<Integer> ctx) {
        }

        @Override
        public void cancel() {
        }
    }).addSink(new DiscardingSink<>());
}
Example #16
Source File: HBaseStreamWriteMain.java From flink-learning with Apache License 2.0

public static void main(String[] args) throws Exception {
    final ParameterTool parameterTool = ExecutionEnvUtil.createParameterTool(args);
    StreamExecutionEnvironment env = ExecutionEnvUtil.prepare(parameterTool);
    Properties props = KafkaConfigUtil.buildKafkaProps(parameterTool);

    /*env.addSource(new FlinkKafkaConsumer011<>(
            parameterTool.get(METRICS_TOPIC),   // this Kafka topic must match the one used in the utility class above
            new SimpleStringSchema(),
            props))
            .writeUsingOutputFormat(new HBaseOutputFormat());*/

    DataStream<String> dataStream = env.addSource(new SourceFunction<String>() {
        private static final long serialVersionUID = 1L;

        private volatile boolean isRunning = true;

        @Override
        public void run(SourceContext<String> out) throws Exception {
            while (isRunning) {
                out.collect(String.valueOf(Math.floor(Math.random() * 100)));
            }
        }

        @Override
        public void cancel() {
            isRunning = false;
        }
    });

    dataStream.writeUsingOutputFormat(new HBaseOutputFormat());

    env.execute("Flink HBase connector sink");
}
Example #17
Source File: PulsarFetcherTest.java From pulsar-flink with Apache License 2.0

public TestFetcher(
        SourceFunction.SourceContext<T> sourceContext,
        Map<String, MessageId> seedTopicsWithInitialOffsets,
        SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
        SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
        ProcessingTimeService processingTimeProvider,
        long autoWatermarkInterval,
        OneShotLatch fetchLoopWaitLatch,
        OneShotLatch stateIterationBlockLatch) throws Exception {
    super(
            sourceContext,
            seedTopicsWithInitialOffsets,
            watermarksPeriodic,
            watermarksPunctuated,
            processingTimeProvider,
            autoWatermarkInterval,
            TestFetcher.class.getClassLoader(),
            null,
            null,
            null,
            0,
            null,
            null);
    this.fetchLoopWaitLatch = fetchLoopWaitLatch;
    this.stateIterationBlockLatch = stateIterationBlockLatch;
}
Example #18
Source File: ProtobufKafkaSourceProviderTest.java From stateful-functions with Apache License 2.0

@Test
public void exampleUsage() {
    JsonNode ingressDefinition = fromPath("protobuf-kafka-ingress.yaml");
    JsonIngressSpec<?> spec =
        new JsonIngressSpec<>(
            Constants.PROTOBUF_KAFKA_INGRESS_TYPE,
            new IngressIdentifier<>(Message.class, "foo", "bar"),
            ingressDefinition);

    ProtobufKafkaSourceProvider provider = new ProtobufKafkaSourceProvider();
    SourceFunction<?> source = provider.forSpec(spec);

    assertThat(source, instanceOf(FlinkKafkaConsumer.class));
}
Example #19
Source File: MySourceProvider.java From stateful-functions with Apache License 2.0

@Override
public <T> SourceFunction<T> forSpec(IngressSpec<T> ingressSpec) {
    MyIngressSpec<T> spec = asMyIngressSpec(ingressSpec);
    MySourceFunction source = new MySourceFunction();

    // configure the source based on the provided spec

    return source;
}
Example #20
Source File: FlinkKinesisConsumerTest.java From flink with Apache License 2.0

@Test
@SuppressWarnings("unchecked")
public void testFetcherShouldNotBeRestoringFromFailureIfNotRestoringFromCheckpoint() throws Exception {
    KinesisDataFetcher mockedFetcher = mockKinesisDataFetcher();

    // assume the given config is correct
    PowerMockito.mockStatic(KinesisConfigUtil.class);
    PowerMockito.doNothing().when(KinesisConfigUtil.class);

    TestableFlinkKinesisConsumer consumer =
        new TestableFlinkKinesisConsumer("fakeStream", new Properties(), 10, 2);
    consumer.open(new Configuration());
    consumer.run(Mockito.mock(SourceFunction.SourceContext.class));
}
Example #21
Source File: AkkaSource.java From flink-learning with Apache License 2.0

@Override
public void run(SourceFunction.SourceContext<Object> ctx) throws Exception {
    LOG.info("Starting the Receiver actor {}", actorName);
    receiverActor = receiverActorSystem.actorOf(
        Props.create(classForActor, ctx, urlOfPublisher, autoAck), actorName);

    LOG.info("Started the Receiver actor {} successfully", actorName);
    Await.result(receiverActorSystem.whenTerminated(), Duration.Inf());
}
Example #22
Source File: KafkaSourceProvider.java From stateful-functions with Apache License 2.0

@Override
public <T> SourceFunction<T> forSpec(IngressSpec<T> ingressSpec) {
    KafkaIngressSpec<T> spec = asKafkaSpec(ingressSpec);

    Properties properties = new Properties();
    properties.putAll(spec.properties());
    properties.put("bootstrap.servers", spec.kafkaAddress());

    return new FlinkKafkaConsumer<>(spec.topics(), deserializationSchemaFromSpec(spec), properties);
}
Example #23
Source File: FlinkKinesisConsumer.java From flink with Apache License 2.0

/** This method is exposed for tests that need to mock the KinesisDataFetcher in the consumer. */
protected KinesisDataFetcher<T> createFetcher(
        List<String> streams,
        SourceFunction.SourceContext<T> sourceContext,
        RuntimeContext runtimeContext,
        Properties configProps,
        KinesisDeserializationSchema<T> deserializationSchema) {

    return new KinesisDataFetcher<>(streams, sourceContext, runtimeContext, configProps,
        deserializationSchema, shardAssigner, periodicWatermarkAssigner, watermarkTracker);
}
Example #24
Source File: SourceFunctionUtil.java From Flink-CEPplus with Apache License 2.0

private static <T extends Serializable> List<T> runNonRichSourceFunction(SourceFunction<T> sourceFunction) {
    final List<T> outputs = new ArrayList<>();
    try {
        SourceFunction.SourceContext<T> ctx = new CollectingSourceContext<T>(new Object(), outputs);
        sourceFunction.run(ctx);
    } catch (Exception e) {
        throw new RuntimeException("Cannot invoke source.", e);
    }
    return outputs;
}
Example #25
Source File: KinesisSourceProvider.java From flink-statefun with Apache License 2.0

@Override
public <T> SourceFunction<T> forSpec(IngressSpec<T> spec) {
    final KinesisIngressSpec<T> kinesisIngressSpec = asKinesisSpec(spec);

    return new FlinkKinesisConsumer<>(
        kinesisIngressSpec.streams(),
        deserializationSchemaFromSpec(kinesisIngressSpec),
        propertiesFromSpec(kinesisIngressSpec));
}
Example #26
Source File: KinesisDataFetcher.java From flink with Apache License 2.0

@VisibleForTesting
protected KinesisDataFetcher(List<String> streams,
                             SourceFunction.SourceContext<T> sourceContext,
                             Object checkpointLock,
                             RuntimeContext runtimeContext,
                             Properties configProps,
                             KinesisDeserializationSchema<T> deserializationSchema,
                             KinesisShardAssigner shardAssigner,
                             AssignerWithPeriodicWatermarks<T> periodicWatermarkAssigner,
                             WatermarkTracker watermarkTracker,
                             AtomicReference<Throwable> error,
                             List<KinesisStreamShardState> subscribedShardsState,
                             HashMap<String, String> subscribedStreamsToLastDiscoveredShardIds,
                             FlinkKinesisProxyFactory kinesisProxyFactory) {
    this.streams = checkNotNull(streams);
    this.configProps = checkNotNull(configProps);
    this.sourceContext = checkNotNull(sourceContext);
    this.checkpointLock = checkNotNull(checkpointLock);
    this.runtimeContext = checkNotNull(runtimeContext);
    this.totalNumberOfConsumerSubtasks = runtimeContext.getNumberOfParallelSubtasks();
    this.indexOfThisConsumerSubtask = runtimeContext.getIndexOfThisSubtask();
    this.deserializationSchema = checkNotNull(deserializationSchema);
    this.shardAssigner = checkNotNull(shardAssigner);
    this.periodicWatermarkAssigner = periodicWatermarkAssigner;
    this.watermarkTracker = watermarkTracker;
    this.kinesisProxyFactory = checkNotNull(kinesisProxyFactory);
    this.kinesis = kinesisProxyFactory.create(configProps);

    this.consumerMetricGroup = runtimeContext.getMetricGroup()
        .addGroup(KinesisConsumerMetricConstants.KINESIS_CONSUMER_METRICS_GROUP);

    this.error = checkNotNull(error);
    this.subscribedShardsState = checkNotNull(subscribedShardsState);
    this.subscribedStreamsToLastDiscoveredShardIds = checkNotNull(subscribedStreamsToLastDiscoveredShardIds);

    this.shardConsumersExecutor =
        createShardConsumersThreadPool(runtimeContext.getTaskNameWithSubtasks());
    this.recordEmitter = createRecordEmitter(configProps);
}
Example #27
Source File: StreamSourceContexts.java From flink with Apache License 2.0

/**
 * Depending on the {@link TimeCharacteristic}, this method will return the adequate
 * {@link org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext}. That is:
 * <ul>
 *     <li>{@link TimeCharacteristic#IngestionTime} = {@code AutomaticWatermarkContext}</li>
 *     <li>{@link TimeCharacteristic#ProcessingTime} = {@code NonTimestampContext}</li>
 *     <li>{@link TimeCharacteristic#EventTime} = {@code ManualWatermarkContext}</li>
 * </ul>
 */
public static <OUT> SourceFunction.SourceContext<OUT> getSourceContext(
        TimeCharacteristic timeCharacteristic,
        ProcessingTimeService processingTimeService,
        Object checkpointLock,
        StreamStatusMaintainer streamStatusMaintainer,
        Output<StreamRecord<OUT>> output,
        long watermarkInterval,
        long idleTimeout) {

    final SourceFunction.SourceContext<OUT> ctx;
    switch (timeCharacteristic) {
        case EventTime:
            ctx = new ManualWatermarkContext<>(
                output,
                processingTimeService,
                checkpointLock,
                streamStatusMaintainer,
                idleTimeout);
            break;
        case IngestionTime:
            ctx = new AutomaticWatermarkContext<>(
                output,
                watermarkInterval,
                processingTimeService,
                checkpointLock,
                streamStatusMaintainer,
                idleTimeout);
            break;
        case ProcessingTime:
            ctx = new NonTimestampContext<>(checkpointLock, output);
            break;
        default:
            throw new IllegalArgumentException(String.valueOf(timeCharacteristic));
    }
    return ctx;
}
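In the Flink 1.x API these examples target, the TimeCharacteristic that drives the switch above is chosen on the execution environment. A minimal sketch (assuming the 1.x setStreamTimeCharacteristic API, which was deprecated in later releases):

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// EventTime means sources receive a ManualWatermarkContext and must emit
// timestamps and watermarks themselves, as Example #30 below does.
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);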
Example #28
Source File: SourceSinkModule.java From stateful-functions with Apache License 2.0

@Override
public <T> SourceFunction<T> forSpec(IngressSpec<T> spec) {
    if (!(spec instanceof SourceFunctionSpec)) {
        throw new IllegalStateException("spec " + spec + " is not of type SourceFunctionSpec");
    }
    SourceFunctionSpec<T> casted = (SourceFunctionSpec<T>) spec;
    return casted.delegate();
}
Example #29
Source File: EventTimeWindowCheckpointingITCase.java From Flink-CEPplus with Apache License 2.0

@Override
public void emitEvent(SourceFunction.SourceContext<Tuple2<Long, IntType>> ctx, int eventSequenceNo) {
    final IntType intTypeNext = new IntType(eventSequenceNo);
    for (long i = 0; i < keyUniverseSize; i++) {
        final Tuple2<Long, IntType> generatedEvent = new Tuple2<>(i, intTypeNext);
        ctx.collectWithTimestamp(generatedEvent, eventSequenceNo);
    }

    ctx.emitWatermark(new Watermark(eventSequenceNo - watermarkTrailing));
}
Example #30
Source File: SourceFunctionToWatermark.java From flink-simple-tutorial with Apache License 2.0

public static void main(String[] args) throws Exception {
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // use an in-memory array as the input source
    String[] elementInput = new String[]{"hello Flink, 17788900", "Second Line, 17788923"};
    DataStream<String> text = env.addSource(new SourceFunction<String>() {
        @Override
        public void run(SourceContext<String> ctx) throws Exception {
            for (String s : elementInput) {
                // split each record into payload and timestamp
                String[] inp = s.split(",");
                // parse the event-time timestamp; trim() removes the space
                // left after the comma, which would otherwise break parsing
                Long timestamp = Long.valueOf(inp[1].trim());
                ctx.collectWithTimestamp(s, timestamp);
                // call emitWatermark() to generate a watermark;
                // the maximum out-of-orderness is set to 2
                ctx.emitWatermark(new Watermark(timestamp - 2));
            }
            // emit a final watermark so any remaining windows fire
            ctx.emitWatermark(new Watermark(Long.MAX_VALUE));
        }

        @Override
        public void cancel() {
        }
    });

    text.print();
    env.execute();
}