org.apache.flink.streaming.connectors.kafka.internals.KafkaCommitCallback Java Examples

The following examples show how to use org.apache.flink.streaming.connectors.kafka.internals.KafkaCommitCallback. They are taken from the open-source flink and Flink-CEPplus projects; the source file and license are noted above each example.
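
KafkaCommitCallback is the internal callback interface that Flink's Kafka fetchers invoke once an asynchronous offset commit completes: onSuccess() when the commit went through, onException(Throwable) when it failed. A minimal implementation, in the spirit of the metrics-updating callback in Example #9 below, might look like this (the class name and messages are illustrative):

import org.apache.flink.streaming.connectors.kafka.internals.KafkaCommitCallback;

// illustrative implementation; Flink's own callbacks typically update metrics instead
public class LoggingCommitCallback implements KafkaCommitCallback {

	@Override
	public void onSuccess() {
		System.out.println("Async Kafka offset commit succeeded.");
	}

	@Override
	public void onException(Throwable cause) {
		System.err.println("Async Kafka offset commit failed: " + cause);
	}
}
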
Example #1
Source File: KafkaConsumerThread.java    From flink with Apache License 2.0
/**
 * Tells this thread to commit a set of offsets. This method does not block, the committing
 * operation will happen asynchronously.
 *
 * <p>Only one commit operation may be pending at any time. If the committing takes longer than
 * the frequency with which this method is called, then some commits may be skipped due to being
 * superseded by newer ones.
 *
 * @param offsetsToCommit The offsets to commit
 * @param commitCallback callback when Kafka commit completes
 */
void setOffsetsToCommit(
		Map<TopicPartition, OffsetAndMetadata> offsetsToCommit,
		@Nonnull KafkaCommitCallback commitCallback) {

	// record the work to be committed by the main consumer thread and make sure the consumer notices it
	if (nextOffsetsToCommit.getAndSet(Tuple2.of(offsetsToCommit, commitCallback)) != null) {
		log.warn("Committing offsets to Kafka takes longer than the checkpoint interval. " +
				"Skipping commit of previous offsets because newer complete checkpoint offsets are available. " +
				"This does not compromise Flink's checkpoint integrity.");
	}

	// if the consumer is blocked in a poll() or handover operation, wake it up to commit soon
	handover.wakeupProducer();

	synchronized (consumerReassignmentLock) {
		if (consumer != null) {
			consumer.wakeup();
		} else {
			// the consumer is currently isolated for partition reassignment;
			// set this flag so that the wakeup state is restored once the reassignment is complete
			hasBufferedWakeup = true;
		}
	}
}
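
The "only one commit pending" contract above rests on a single AtomicReference slot: getAndSet atomically swaps in the new commit request and returns the previous one, so a non-null return value means the earlier request was never picked up and is being superseded. A standalone sketch of the same pattern (the class and its names are illustrative, not from Flink):

import java.util.concurrent.atomic.AtomicReference;

public class SingleSlotMailbox {

	// holds at most one pending request; a newer request replaces an unclaimed older one
	private final AtomicReference<String> pending = new AtomicReference<>();

	/** Producer side: returns true if an older, unclaimed request was superseded. */
	boolean offer(String request) {
		return pending.getAndSet(request) != null;
	}

	/** Consumer side: claims and clears the pending request, or returns null. */
	String poll() {
		return pending.getAndSet(null);
	}

	public static void main(String[] args) {
		SingleSlotMailbox mailbox = new SingleSlotMailbox();
		mailbox.offer("commit-1");
		System.out.println(mailbox.offer("commit-2")); // true: commit-1 was superseded
		System.out.println(mailbox.poll());            // commit-2
		System.out.println(mailbox.poll());            // null
	}
}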
 
Example #2
Source File: Kafka010Fetcher.java    From flink with Apache License 2.0
@Override
protected void doCommitInternalOffsetsToKafka(
		Map<KafkaTopicPartition, Long> offsets,
		@Nonnull KafkaCommitCallback commitCallback) throws Exception {

	List<KafkaTopicPartitionState<T, TopicPartition>> partitions = subscribedPartitionStates();

	Map<TopicPartition, OffsetAndMetadata> offsetsToCommit = new HashMap<>(partitions.size());

	for (KafkaTopicPartitionState<T, TopicPartition> partition : partitions) {
		Long lastProcessedOffset = offsets.get(partition.getKafkaTopicPartition());
		if (lastProcessedOffset != null) {
			checkState(lastProcessedOffset >= 0, "Illegal offset value to commit");

			// committed offsets through the KafkaConsumer need to be 1 more than the last processed offset.
			// This does not affect Flink's checkpoints/saved state.
			long offsetToCommit = lastProcessedOffset + 1;

			offsetsToCommit.put(partition.getKafkaPartitionHandle(), new OffsetAndMetadata(offsetToCommit));
			partition.setCommittedOffset(offsetToCommit);
		}
	}

	// record the work to be committed by the main consumer thread and make sure the consumer notices it
	consumerThread.setOffsetsToCommit(offsetsToCommit, commitCallback);
}
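
The + 1 in the loop follows Kafka's convention that a committed offset names the next record to read, not the last record read; committing lastProcessedOffset itself would replay that record after a restart that resumes from the group offsets. A tiny demonstration (the offset value is made up):

import org.apache.kafka.clients.consumer.OffsetAndMetadata;

public class OffsetConventionDemo {

	public static void main(String[] args) {
		long lastProcessedOffset = 41L; // offset of the last record handed downstream
		// Kafka expects the offset of the next record to consume, hence the + 1
		OffsetAndMetadata toCommit = new OffsetAndMetadata(lastProcessedOffset + 1);
		System.out.println(toCommit.offset()); // 42
	}
}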
 
Example #3
Source File: FlinkKafkaConsumerBaseTest.java    From flink with Apache License 2.0
@Override
protected void doCommitInternalOffsetsToKafka(
		Map<KafkaTopicPartition, Long> offsets,
		@Nonnull KafkaCommitCallback commitCallback) throws Exception {
	this.lastCommittedOffsets = offsets;
	this.commitCount++;
	commitCallback.onSuccess();
}
 
Example #4
Source File: KafkaFetcher.java    From flink with Apache License 2.0
@Override
protected void doCommitInternalOffsetsToKafka(
	Map<KafkaTopicPartition, Long> offsets,
	@Nonnull KafkaCommitCallback commitCallback) throws Exception {

	@SuppressWarnings("unchecked")
	List<KafkaTopicPartitionState<TopicPartition>> partitions = subscribedPartitionStates();

	Map<TopicPartition, OffsetAndMetadata> offsetsToCommit = new HashMap<>(partitions.size());

	for (KafkaTopicPartitionState<TopicPartition> partition : partitions) {
		Long lastProcessedOffset = offsets.get(partition.getKafkaTopicPartition());
		if (lastProcessedOffset != null) {
			checkState(lastProcessedOffset >= 0, "Illegal offset value to commit");

			// committed offsets through the KafkaConsumer need to be 1 more than the last processed offset.
			// This does not affect Flink's checkpoints/saved state.
			long offsetToCommit = lastProcessedOffset + 1;

			offsetsToCommit.put(partition.getKafkaPartitionHandle(), new OffsetAndMetadata(offsetToCommit));
			partition.setCommittedOffset(offsetToCommit);
		}
	}

	// record the work to be committed by the main consumer thread and make sure the consumer notices it
	consumerThread.setOffsetsToCommit(offsetsToCommit, commitCallback);
}
 
Example #5
Source File: KafkaFetcher.java    From flink with Apache License 2.0
@Override
protected void doCommitInternalOffsetsToKafka(
	Map<KafkaTopicPartition, Long> offsets,
	@Nonnull KafkaCommitCallback commitCallback) throws Exception {

	@SuppressWarnings("unchecked")
	List<KafkaTopicPartitionState<T, TopicPartition>> partitions = subscribedPartitionStates();

	Map<TopicPartition, OffsetAndMetadata> offsetsToCommit = new HashMap<>(partitions.size());

	for (KafkaTopicPartitionState<T, TopicPartition> partition : partitions) {
		Long lastProcessedOffset = offsets.get(partition.getKafkaTopicPartition());
		if (lastProcessedOffset != null) {
			checkState(lastProcessedOffset >= 0, "Illegal offset value to commit");

			// committed offsets through the KafkaConsumer need to be 1 more than the last processed offset.
			// This does not affect Flink's checkpoints/saved state.
			long offsetToCommit = lastProcessedOffset + 1;

			offsetsToCommit.put(partition.getKafkaPartitionHandle(), new OffsetAndMetadata(offsetToCommit));
			partition.setCommittedOffset(offsetToCommit);
		}
	}

	// record the work to be committed by the main consumer thread and make sure the consumer notices it
	consumerThread.setOffsetsToCommit(offsetsToCommit, commitCallback);
}
 
Example #6
Source File: Kafka09Fetcher.java    From flink with Apache License 2.0
@Override
protected void doCommitInternalOffsetsToKafka(
		Map<KafkaTopicPartition, Long> offsets,
		@Nonnull KafkaCommitCallback commitCallback) throws Exception {

	@SuppressWarnings("unchecked")
	List<KafkaTopicPartitionState<TopicPartition>> partitions = subscribedPartitionStates();

	Map<TopicPartition, OffsetAndMetadata> offsetsToCommit = new HashMap<>(partitions.size());

	for (KafkaTopicPartitionState<TopicPartition> partition : partitions) {
		Long lastProcessedOffset = offsets.get(partition.getKafkaTopicPartition());
		if (lastProcessedOffset != null) {
			checkState(lastProcessedOffset >= 0, "Illegal offset value to commit");

			// committed offsets through the KafkaConsumer need to be 1 more than the last processed offset.
			// This does not affect Flink's checkpoints/saved state.
			long offsetToCommit = lastProcessedOffset + 1;

			offsetsToCommit.put(partition.getKafkaPartitionHandle(), new OffsetAndMetadata(offsetToCommit));
			partition.setCommittedOffset(offsetToCommit);
		}
	}

	// record the work to be committed by the main consumer thread and make sure the consumer notices it
	consumerThread.setOffsetsToCommit(offsetsToCommit, commitCallback);
}
 
Example #7
Source File: KafkaConsumerThread.java    From flink with Apache License 2.0
CommitCallback(KafkaCommitCallback internalCommitCallback) {
	this.internalCommitCallback = checkNotNull(internalCommitCallback);
}
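
Only the constructor of this inner class appears in the snippet. Inside KafkaConsumerThread it bridges Kafka's OffsetCommitCallback to Flink's KafkaCommitCallback; the following is a sketch of that bridge with an approximated body, not the verbatim Flink source (the real version also clears an internal commit-in-progress flag):

import java.util.Map;

import org.apache.flink.streaming.connectors.kafka.internals.KafkaCommitCallback;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.consumer.OffsetCommitCallback;
import org.apache.kafka.common.TopicPartition;

import static org.apache.flink.util.Preconditions.checkNotNull;

// approximation of the inner class whose constructor is shown above
class CommitCallback implements OffsetCommitCallback {

	private final KafkaCommitCallback internalCommitCallback;

	CommitCallback(KafkaCommitCallback internalCommitCallback) {
		this.internalCommitCallback = checkNotNull(internalCommitCallback);
	}

	@Override
	public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception ex) {
		// split Kafka's single completion callback into Flink's success/failure paths
		if (ex != null) {
			internalCommitCallback.onException(ex);
		} else {
			internalCommitCallback.onSuccess();
		}
	}
}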
 
Example #8
Source File: KafkaConsumerThreadTest.java    From flink with Apache License 2.0
/**
 * Tests that reassignment works correctly when:
 *  - the consumer has no initial assignments
 *  - the new unassigned partitions have undefined offsets
 *  - the consumer was woken up prior to the reassignment
 *
 * <p>In this case, the reassignment should not have occurred at all, and the consumer retains its original assignment.
 *
 * <p>A timeout is set because the test will not finish if there is a logic error in
 * the reassignment flow.
 */
@SuppressWarnings("unchecked")
@Test(timeout = 10000)
public void testReassignPartitionsDefinedOffsetsWithoutInitialAssignmentsWhenEarlyWakeup() throws Exception {
	final String testTopic = "test-topic";

	// -------- new partitions with defined offsets --------

	KafkaTopicPartitionState<Object, TopicPartition> newPartition1 = new KafkaTopicPartitionState<>(
		new KafkaTopicPartition(testTopic, 0), new TopicPartition(testTopic, 0));
	newPartition1.setOffset(KafkaTopicPartitionStateSentinel.EARLIEST_OFFSET);

	KafkaTopicPartitionState<Object, TopicPartition> newPartition2 = new KafkaTopicPartitionState<>(
		new KafkaTopicPartition(testTopic, 1), new TopicPartition(testTopic, 1));
	newPartition2.setOffset(KafkaTopicPartitionStateSentinel.EARLIEST_OFFSET);

	List<KafkaTopicPartitionState<Object, TopicPartition>> newPartitions = new ArrayList<>(2);
	newPartitions.add(newPartition1);
	newPartitions.add(newPartition2);

	// -------- setup mock KafkaConsumer --------

	// no initial assignments
	final Map<TopicPartition, Long> mockConsumerAssignmentsAndPositions = new LinkedHashMap<>();

	// mock retrieved values that should replace the EARLIEST_OFFSET sentinels
	final Map<TopicPartition, Long> mockRetrievedPositions = new HashMap<>();
	mockRetrievedPositions.put(newPartition1.getKafkaPartitionHandle(), 23L);
	mockRetrievedPositions.put(newPartition2.getKafkaPartitionHandle(), 32L);

	final TestConsumer mockConsumer = createMockConsumer(
			mockConsumerAssignmentsAndPositions,
			mockRetrievedPositions,
			true,
			null,
			null);

	// -------- setup new partitions to be polled from the unassigned partitions queue --------

	final ClosableBlockingQueue<KafkaTopicPartitionState<Object, TopicPartition>> unassignedPartitionsQueue =
		new ClosableBlockingQueue<>();

	for (KafkaTopicPartitionState<Object, TopicPartition> newPartition : newPartitions) {
		unassignedPartitionsQueue.add(newPartition);
	}

	// -------- start test --------

	final TestKafkaConsumerThread testThread =
		new TestKafkaConsumerThread(mockConsumer, unassignedPartitionsQueue, new Handover());
	testThread.start();

	// pause just before the reassignment so we can inject the wakeup
	testThread.waitPartitionReassignmentInvoked();

	testThread.setOffsetsToCommit(new HashMap<TopicPartition, OffsetAndMetadata>(), mock(KafkaCommitCallback.class));

	// make sure the consumer was actually woken up
	assertEquals(1, mockConsumer.getNumWakeupCalls());

	testThread.startPartitionReassignment();
	testThread.waitPartitionReassignmentComplete();

	// the consumer's assignment should have remained untouched (in this case, empty)
	assertEquals(0, mockConsumerAssignmentsAndPositions.size());

	// the new partitions should have been re-added to the unassigned partitions queue
	assertEquals(2, unassignedPartitionsQueue.size());
}
 
Example #9
Source File: FlinkKafkaConsumerBase.java    From flink with Apache License 2.0
@Override
public void run(SourceContext<T> sourceContext) throws Exception {
	if (subscribedPartitionsToStartOffsets == null) {
		throw new Exception("The partitions were not set for the consumer");
	}

	// initialize commit metrics and default offset callback method
	this.successfulCommits = this.getRuntimeContext().getMetricGroup().counter(COMMITS_SUCCEEDED_METRICS_COUNTER);
	this.failedCommits =  this.getRuntimeContext().getMetricGroup().counter(COMMITS_FAILED_METRICS_COUNTER);
	final int subtaskIndex = this.getRuntimeContext().getIndexOfThisSubtask();

	this.offsetCommitCallback = new KafkaCommitCallback() {
		@Override
		public void onSuccess() {
			successfulCommits.inc();
		}

		@Override
		public void onException(Throwable cause) {
			LOG.warn(String.format("Consumer subtask %d failed async Kafka commit.", subtaskIndex), cause);
			failedCommits.inc();
		}
	};

	// mark the subtask as temporarily idle if there are no initial seed partitions;
	// once this subtask discovers some partitions and starts collecting records, the subtask's
	// status will automatically be switched back to active.
	if (subscribedPartitionsToStartOffsets.isEmpty()) {
		sourceContext.markAsTemporarilyIdle();
	}

	LOG.info("Consumer subtask {} creating fetcher with offsets {}.",
		getRuntimeContext().getIndexOfThisSubtask(), subscribedPartitionsToStartOffsets);
	// from this point forward:
	//   - 'snapshotState' will draw offsets from the fetcher,
	//     instead of being built from `subscribedPartitionsToStartOffsets`
	//   - 'notifyCheckpointComplete' will start to do work (i.e. commit offsets to
	//     Kafka through the fetcher, if configured to do so)
	this.kafkaFetcher = createFetcher(
			sourceContext,
			subscribedPartitionsToStartOffsets,
			watermarkStrategy,
			(StreamingRuntimeContext) getRuntimeContext(),
			offsetCommitMode,
			getRuntimeContext().getMetricGroup().addGroup(KAFKA_CONSUMER_METRICS_GROUP),
			useMetrics);

	if (!running) {
		return;
	}

	// depending on whether we were restored with the current state version (1.3),
	// the remaining logic branches into two paths:
	//  1) New state - partition discovery loop executed as separate thread, with this
	//                 thread running the main fetcher loop
	//  2) Old state - partition discovery is disabled and only the main fetcher loop is executed
	if (discoveryIntervalMillis == PARTITION_DISCOVERY_DISABLED) {
		kafkaFetcher.runFetchLoop();
	} else {
		runWithPartitionDiscovery();
	}
}
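
When offset committing to Kafka is enabled, this offsetCommitCallback is later handed to the fetcher: notifyCheckpointComplete calls the fetcher's commitInternalOffsetsToKafka(...), which delegates to the doCommitInternalOffsetsToKafka overrides shown in the earlier examples. The two counters therefore track every asynchronous commit triggered by a completed checkpoint.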
 
Example #10
Source File: FlinkKafkaConsumerBase.java    From flink with Apache License 2.0
@Override
public void run(SourceContext<T> sourceContext) throws Exception {
	if (subscribedPartitionsToStartOffsets == null) {
		throw new Exception("The partitions were not set for the consumer");
	}

	// initialize commit metrics and default offset callback method
	this.successfulCommits = this.getRuntimeContext().getMetricGroup().counter(COMMITS_SUCCEEDED_METRICS_COUNTER);
	this.failedCommits =  this.getRuntimeContext().getMetricGroup().counter(COMMITS_FAILED_METRICS_COUNTER);
	final int subtaskIndex = this.getRuntimeContext().getIndexOfThisSubtask();

	this.offsetCommitCallback = new KafkaCommitCallback() {
		@Override
		public void onSuccess() {
			successfulCommits.inc();
		}

		@Override
		public void onException(Throwable cause) {
			LOG.warn(String.format("Consumer subtask %d failed async Kafka commit.", subtaskIndex), cause);
			failedCommits.inc();
		}
	};

	// mark the subtask as temporarily idle if there are no initial seed partitions;
	// once this subtask discovers some partitions and starts collecting records, the subtask's
	// status will automatically be switched back to active.
	if (subscribedPartitionsToStartOffsets.isEmpty()) {
		sourceContext.markAsTemporarilyIdle();
	}

	LOG.info("Consumer subtask {} creating fetcher with offsets {}.",
		getRuntimeContext().getIndexOfThisSubtask(), subscribedPartitionsToStartOffsets);
	// from this point forward:
	//   - 'snapshotState' will draw offsets from the fetcher,
	//     instead of being built from `subscribedPartitionsToStartOffsets`
	//   - 'notifyCheckpointComplete' will start to do work (i.e. commit offsets to
	//     Kafka through the fetcher, if configured to do so)
	this.kafkaFetcher = createFetcher(
			sourceContext,
			subscribedPartitionsToStartOffsets,
			periodicWatermarkAssigner,
			punctuatedWatermarkAssigner,
			(StreamingRuntimeContext) getRuntimeContext(),
			offsetCommitMode,
			getRuntimeContext().getMetricGroup().addGroup(KAFKA_CONSUMER_METRICS_GROUP),
			useMetrics);

	if (!running) {
		return;
	}

	// depending on whether we were restored with the current state version (1.3),
	// the remaining logic branches into two paths:
	//  1) New state - partition discovery loop executed as separate thread, with this
	//                 thread running the main fetcher loop
	//  2) Old state - partition discovery is disabled and only the main fetcher loop is executed
	if (discoveryIntervalMillis == PARTITION_DISCOVERY_DISABLED) {
		kafkaFetcher.runFetchLoop();
	} else {
		runWithPartitionDiscovery();
	}
}
 
Example #11
Source File: KafkaConsumerThreadTest.java    From flink with Apache License 2.0
/**
 * Tests that reassignment works correctly when:
 *  - the consumer has no initial assignments
 *  - the new unassigned partitions have undefined offsets
 *  - the consumer was woken up prior to the reassignment
 *
 * <p>In this case, the reassignment should not have occurred at all, and the consumer retains its original assignment.
 *
 * <p>A timeout is set because the test will not finish if there is a logic error in
 * the reassignment flow.
 */
@SuppressWarnings("unchecked")
@Test(timeout = 10000)
public void testReassignPartitionsDefinedOffsetsWithoutInitialAssignmentsWhenEarlyWakeup() throws Exception {
	final String testTopic = "test-topic";

	// -------- new partitions with defined offsets --------

	KafkaTopicPartitionState<TopicPartition> newPartition1 = new KafkaTopicPartitionState<>(
		new KafkaTopicPartition(testTopic, 0), new TopicPartition(testTopic, 0));
	newPartition1.setOffset(KafkaTopicPartitionStateSentinel.EARLIEST_OFFSET);

	KafkaTopicPartitionState<TopicPartition> newPartition2 = new KafkaTopicPartitionState<>(
		new KafkaTopicPartition(testTopic, 1), new TopicPartition(testTopic, 1));
	newPartition2.setOffset(KafkaTopicPartitionStateSentinel.EARLIEST_OFFSET);

	List<KafkaTopicPartitionState<TopicPartition>> newPartitions = new ArrayList<>(2);
	newPartitions.add(newPartition1);
	newPartitions.add(newPartition2);

	// -------- setup mock KafkaConsumer --------

	// no initial assignments
	final Map<TopicPartition, Long> mockConsumerAssignmentsAndPositions = new LinkedHashMap<>();

	// mock retrieved values that should replace the EARLIEST_OFFSET sentinels
	final Map<TopicPartition, Long> mockRetrievedPositions = new HashMap<>();
	mockRetrievedPositions.put(newPartition1.getKafkaPartitionHandle(), 23L);
	mockRetrievedPositions.put(newPartition2.getKafkaPartitionHandle(), 32L);

	final KafkaConsumer<byte[], byte[]> mockConsumer = createMockConsumer(
			mockConsumerAssignmentsAndPositions,
			mockRetrievedPositions,
			true,
			null,
			null);

	// -------- setup new partitions to be polled from the unassigned partitions queue --------

	final ClosableBlockingQueue<KafkaTopicPartitionState<TopicPartition>> unassignedPartitionsQueue =
		new ClosableBlockingQueue<>();

	for (KafkaTopicPartitionState<TopicPartition> newPartition : newPartitions) {
		unassignedPartitionsQueue.add(newPartition);
	}

	// -------- start test --------

	final TestKafkaConsumerThread testThread =
		new TestKafkaConsumerThread(mockConsumer, unassignedPartitionsQueue, new Handover());
	testThread.start();

	// pause just before the reassignment so we can inject the wakeup
	testThread.waitPartitionReassignmentInvoked();

	testThread.setOffsetsToCommit(new HashMap<TopicPartition, OffsetAndMetadata>(), mock(KafkaCommitCallback.class));

	// make sure the consumer was actually woken up
	verify(mockConsumer, times(1)).wakeup();

	testThread.startPartitionReassignment();
	testThread.waitPartitionReassignmentComplete();

	// the consumer's assignment should have remained untouched (in this case, empty)
	assertEquals(0, mockConsumerAssignmentsAndPositions.size());

	// the new partitions should have been re-added to the unassigned partitions queue
	assertEquals(2, unassignedPartitionsQueue.size());
}
 
Example #12
Source File: FlinkKafkaConsumerBase.java    From Flink-CEPplus with Apache License 2.0
@Override
public void run(SourceContext<T> sourceContext) throws Exception {
	if (subscribedPartitionsToStartOffsets == null) {
		throw new Exception("The partitions were not set for the consumer");
	}

	// initialize commit metrics and default offset callback method
	this.successfulCommits = this.getRuntimeContext().getMetricGroup().counter(COMMITS_SUCCEEDED_METRICS_COUNTER);
	this.failedCommits =  this.getRuntimeContext().getMetricGroup().counter(COMMITS_FAILED_METRICS_COUNTER);

	this.offsetCommitCallback = new KafkaCommitCallback() {
		@Override
		public void onSuccess() {
			successfulCommits.inc();
		}

		@Override
		public void onException(Throwable cause) {
			LOG.warn("Async Kafka commit failed.", cause);
			failedCommits.inc();
		}
	};

	// mark the subtask as temporarily idle if there are no initial seed partitions;
	// once this subtask discovers some partitions and starts collecting records, the subtask's
	// status will automatically be switched back to active.
	if (subscribedPartitionsToStartOffsets.isEmpty()) {
		sourceContext.markAsTemporarilyIdle();
	}

	// from this point forward:
	//   - 'snapshotState' will draw offsets from the fetcher,
	//     instead of being built from `subscribedPartitionsToStartOffsets`
	//   - 'notifyCheckpointComplete' will start to do work (i.e. commit offsets to
	//     Kafka through the fetcher, if configured to do so)
	this.kafkaFetcher = createFetcher(
			sourceContext,
			subscribedPartitionsToStartOffsets,
			periodicWatermarkAssigner,
			punctuatedWatermarkAssigner,
			(StreamingRuntimeContext) getRuntimeContext(),
			offsetCommitMode,
			getRuntimeContext().getMetricGroup().addGroup(KAFKA_CONSUMER_METRICS_GROUP),
			useMetrics);

	if (!running) {
		return;
	}

	// depending on whether we were restored with the current state version (1.3),
	// the remaining logic branches into two paths:
	//  1) New state - partition discovery loop executed as separate thread, with this
	//                 thread running the main fetcher loop
	//  2) Old state - partition discovery is disabled and only the main fetcher loop is executed
	if (discoveryIntervalMillis == PARTITION_DISCOVERY_DISABLED) {
		kafkaFetcher.runFetchLoop();
	} else {
		runWithPartitionDiscovery();
	}
}
 
Example #13
Source File: FlinkKafkaConsumerBaseTest.java    From flink with Apache License 2.0
@Override
protected void doCommitInternalOffsetsToKafka(Map<KafkaTopicPartition, Long> offsets, @Nonnull KafkaCommitCallback commitCallback) throws Exception {
	// intentional no-op: this test stub does not exercise offset committing
}
 