Java Code Examples for org.apache.flink.streaming.connectors.kafka.config.StartupMode#TIMESTAMP
The following examples show how to use
org.apache.flink.streaming.connectors.kafka.config.StartupMode#TIMESTAMP.
The original project, source file, and license are noted above each example.
Example 1
Source File: Kafka.java, from flink (Apache License 2.0)
/**
 * Configures to start reading from partition offsets of the specified timestamp.
 *
 * @param startTimestampMillis timestamp to start reading from
 * @see FlinkKafkaConsumerBase#setStartFromTimestamp(long)
 */
public Kafka startFromTimestamp(long startTimestampMillis) {
    this.startupMode = StartupMode.TIMESTAMP;
    this.specificOffsets = null;
    this.startTimestampMillis = startTimestampMillis;
    return this;
}
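In practice this descriptor is chained with the rest of the Kafka connector configuration before being handed to the table environment. The sketch below is only an illustration: the topic name, broker address, and timestamp are assumptions, not taken from the example above.

Kafka kafka = new Kafka()
    .version("universal")
    .topic("user-events")                             // assumed topic name
    .property("bootstrap.servers", "localhost:9092")  // assumed broker address
    .startFromTimestamp(1609459200000L);              // 2021-01-01T00:00:00Z; switches the descriptor to StartupMode.TIMESTAMP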
Example 2
Source File: KafkaTableSourceSinkFactoryBase.java, from flink (Apache License 2.0)
private StartupOptions getStartupOptions(
        DescriptorProperties descriptorProperties,
        String topic) {
    final Map<KafkaTopicPartition, Long> specificOffsets = new HashMap<>();
    final StartupMode startupMode = descriptorProperties
        .getOptionalString(CONNECTOR_STARTUP_MODE)
        .map(modeString -> {
            switch (modeString) {
                case KafkaValidator.CONNECTOR_STARTUP_MODE_VALUE_EARLIEST:
                    return StartupMode.EARLIEST;
                case KafkaValidator.CONNECTOR_STARTUP_MODE_VALUE_LATEST:
                    return StartupMode.LATEST;
                case KafkaValidator.CONNECTOR_STARTUP_MODE_VALUE_GROUP_OFFSETS:
                    return StartupMode.GROUP_OFFSETS;
                case KafkaValidator.CONNECTOR_STARTUP_MODE_VALUE_SPECIFIC_OFFSETS:
                    buildSpecificOffsets(descriptorProperties, topic, specificOffsets);
                    return StartupMode.SPECIFIC_OFFSETS;
                case KafkaValidator.CONNECTOR_STARTUP_MODE_VALUE_TIMESTAMP:
                    return StartupMode.TIMESTAMP;
                default:
                    throw new TableException("Unsupported startup mode. Validator should have checked that.");
            }
        })
        .orElse(StartupMode.GROUP_OFFSETS);
    final StartupOptions options = new StartupOptions();
    options.startupMode = startupMode;
    options.specificOffsets = specificOffsets;
    if (startupMode == StartupMode.TIMESTAMP) {
        options.startupTimestampMillis = descriptorProperties.getLong(CONNECTOR_STARTUP_TIMESTAMP_MILLIS);
    }
    return options;
}
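The mode string switched on above comes from the descriptor properties. As a rough sketch of the input side, assuming the constants CONNECTOR_STARTUP_MODE and CONNECTOR_STARTUP_TIMESTAMP_MILLIS resolve to the literal keys written below (check KafkaValidator for the actual values), properties that make this method return StartupMode.TIMESTAMP could look like this:

DescriptorProperties descriptorProperties = new DescriptorProperties();
descriptorProperties.putString("connector.startup-mode", "timestamp");              // assumed literal for CONNECTOR_STARTUP_MODE
descriptorProperties.putLong("connector.startup-timestamp-millis", 1609459200000L); // assumed literal for CONNECTOR_STARTUP_TIMESTAMP_MILLIS
// getStartupOptions(descriptorProperties, topic) would then return a StartupOptions with
// startupMode == StartupMode.TIMESTAMP and startupTimestampMillis == 1609459200000L.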
Example 3
Source File: KafkaOptions.java, from flink (Apache License 2.0)
public static StartupOptions getStartupOptions(
        ReadableConfig tableOptions,
        String topic) {
    final Map<KafkaTopicPartition, Long> specificOffsets = new HashMap<>();
    final StartupMode startupMode = tableOptions.getOptional(SCAN_STARTUP_MODE)
        .map(modeString -> {
            switch (modeString) {
                case SCAN_STARTUP_MODE_VALUE_EARLIEST:
                    return StartupMode.EARLIEST;
                case SCAN_STARTUP_MODE_VALUE_LATEST:
                    return StartupMode.LATEST;
                case SCAN_STARTUP_MODE_VALUE_GROUP_OFFSETS:
                    return StartupMode.GROUP_OFFSETS;
                case SCAN_STARTUP_MODE_VALUE_SPECIFIC_OFFSETS:
                    buildSpecificOffsets(tableOptions, topic, specificOffsets);
                    return StartupMode.SPECIFIC_OFFSETS;
                case SCAN_STARTUP_MODE_VALUE_TIMESTAMP:
                    return StartupMode.TIMESTAMP;
                default:
                    throw new TableException("Unsupported startup mode. Validator should have checked that.");
            }
        })
        .orElse(StartupMode.GROUP_OFFSETS);
    final StartupOptions options = new StartupOptions();
    options.startupMode = startupMode;
    options.specificOffsets = specificOffsets;
    if (startupMode == StartupMode.TIMESTAMP) {
        options.startupTimestampMillis = tableOptions.get(SCAN_STARTUP_TIMESTAMP_MILLIS);
    }
    return options;
}
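This is the newer, ConfigOption-based variant of the same logic. A minimal sketch of the calling side, assuming SCAN_STARTUP_MODE and SCAN_STARTUP_TIMESTAMP_MILLIS correspond to the keys 'scan.startup.mode' and 'scan.startup.timestamp-millis' and that StartupOptions is the holder class returned above; the topic name and timestamp are made up:

Configuration tableOptions = new Configuration();                          // Configuration implements ReadableConfig
tableOptions.setString("scan.startup.mode", "timestamp");                  // assumed key for SCAN_STARTUP_MODE
tableOptions.setString("scan.startup.timestamp-millis", "1609459200000");  // assumed key for SCAN_STARTUP_TIMESTAMP_MILLIS

StartupOptions options = getStartupOptions(tableOptions, "user-events");   // assumed topic name
// options.startupMode == StartupMode.TIMESTAMP
// options.startupTimestampMillis == 1609459200000L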
Example 4
Source File: FlinkKafkaConsumerBase.java, from Flink-CEPplus (Apache License 2.0)
/**
 * Specifies the consumer to start reading partitions from a specified timestamp.
 * The specified timestamp must be before the current timestamp.
 * This lets the consumer ignore any committed group offsets in Zookeeper / Kafka brokers.
 *
 * <p>The consumer will look up the earliest offset whose timestamp is greater than or equal
 * to the specific timestamp from Kafka. If there's no such offset, the consumer will use the
 * latest offset to read data from kafka.
 *
 * <p>This method does not affect where partitions are read from when the consumer is restored
 * from a checkpoint or savepoint. When the consumer is restored from a checkpoint or
 * savepoint, only the offsets in the restored state will be used.
 *
 * @param startupOffsetsTimestamp timestamp for the startup offsets, as milliseconds from epoch.
 *
 * @return The consumer object, to allow function chaining.
 */
// NOTE -
// This method is implemented in the base class because this is where the startup logging and verifications live.
// However, it is not publicly exposed since only newer Kafka versions support the functionality.
// Version-specific subclasses which can expose the functionality should override and allow public access.
protected FlinkKafkaConsumerBase<T> setStartFromTimestamp(long startupOffsetsTimestamp) {
    checkArgument(startupOffsetsTimestamp >= 0, "The provided value for the startup offsets timestamp is invalid.");

    long currentTimestamp = System.currentTimeMillis();
    checkArgument(startupOffsetsTimestamp <= currentTimestamp,
        "Startup time[%s] must be before current time[%s].", startupOffsetsTimestamp, currentTimestamp);

    this.startupMode = StartupMode.TIMESTAMP;
    this.startupOffsetsTimestamp = startupOffsetsTimestamp;
    this.specificStartupOffsets = null;
    return this;
}
Example 5
Source File: FlinkKafkaConsumerBase.java, from flink (Apache License 2.0)
/**
 * Specifies the consumer to start reading partitions from a specified timestamp.
 * The specified timestamp must be before the current timestamp.
 * This lets the consumer ignore any committed group offsets in Zookeeper / Kafka brokers.
 *
 * <p>The consumer will look up the earliest offset whose timestamp is greater than or equal
 * to the specific timestamp from Kafka. If there's no such offset, the consumer will use the
 * latest offset to read data from kafka.
 *
 * <p>This method does not affect where partitions are read from when the consumer is restored
 * from a checkpoint or savepoint. When the consumer is restored from a checkpoint or
 * savepoint, only the offsets in the restored state will be used.
 *
 * @param startupOffsetsTimestamp timestamp for the startup offsets, as milliseconds from epoch.
 *
 * @return The consumer object, to allow function chaining.
 */
// NOTE -
// This method is implemented in the base class because this is where the startup logging and verifications live.
// However, it is not publicly exposed since only newer Kafka versions support the functionality.
// Version-specific subclasses which can expose the functionality should override and allow public access.
protected FlinkKafkaConsumerBase<T> setStartFromTimestamp(long startupOffsetsTimestamp) {
    checkArgument(startupOffsetsTimestamp >= 0, "The provided value for the startup offsets timestamp is invalid.");

    long currentTimestamp = System.currentTimeMillis();
    checkArgument(startupOffsetsTimestamp <= currentTimestamp,
        "Startup time[%s] must be before current time[%s].", startupOffsetsTimestamp, currentTimestamp);

    this.startupMode = StartupMode.TIMESTAMP;
    this.startupOffsetsTimestamp = startupOffsetsTimestamp;
    this.specificStartupOffsets = null;
    return this;
}
Example 6
Source File: FlinkKafkaConsumerBase.java, from flink (Apache License 2.0)
/**
 * Specifies the consumer to start reading partitions from a specified timestamp.
 * The specified timestamp must be before the current timestamp.
 * This lets the consumer ignore any committed group offsets in Zookeeper / Kafka brokers.
 *
 * <p>The consumer will look up the earliest offset whose timestamp is greater than or equal
 * to the specific timestamp from Kafka. If there's no such offset, the consumer will use the
 * latest offset to read data from kafka.
 *
 * <p>This method does not affect where partitions are read from when the consumer is restored
 * from a checkpoint or savepoint. When the consumer is restored from a checkpoint or
 * savepoint, only the offsets in the restored state will be used.
 *
 * @param startupOffsetsTimestamp timestamp for the startup offsets, as milliseconds from epoch.
 *
 * @return The consumer object, to allow function chaining.
 */
public FlinkKafkaConsumerBase<T> setStartFromTimestamp(long startupOffsetsTimestamp) {
    checkArgument(startupOffsetsTimestamp >= 0, "The provided value for the startup offsets timestamp is invalid.");

    long currentTimestamp = System.currentTimeMillis();
    checkArgument(startupOffsetsTimestamp <= currentTimestamp,
        "Startup time[%s] must be before current time[%s].", startupOffsetsTimestamp, currentTimestamp);

    this.startupMode = StartupMode.TIMESTAMP;
    this.startupOffsetsTimestamp = startupOffsetsTimestamp;
    this.specificStartupOffsets = null;
    return this;
}
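On consumer versions where this method is public (as in the variant above), it is simply called on the consumer before the source is added to the job. A minimal usage sketch; the topic, consumer group, broker address, and timestamp are assumptions:

Properties kafkaProps = new Properties();
kafkaProps.setProperty("bootstrap.servers", "localhost:9092");  // assumed broker address
kafkaProps.setProperty("group.id", "example-group");            // assumed consumer group

FlinkKafkaConsumer<String> consumer =
    new FlinkKafkaConsumer<>("user-events", new SimpleStringSchema(), kafkaProps);
consumer.setStartFromTimestamp(1609459200000L);  // read each partition from the first offset at or after this timestamp

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.addSource(consumer).print();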