Java Code Examples for org.apache.flink.util.ExceptionUtils#rethrowException()
The following examples show how to use org.apache.flink.util.ExceptionUtils#rethrowException().
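Before the project examples, a minimal sketch of the common pattern may help: a failure observed elsewhere (for instance on a worker thread) is stored as a Throwable, and rethrowException() later surfaces it from a method declared with throws Exception, rethrowing Exceptions and Errors as-is and wrapping any other Throwable in a new Exception carrying the supplied message. The class AsyncWork, the field failureCause, and the method checkForErrors below are hypothetical names used only for illustration.

import org.apache.flink.util.ExceptionUtils;

// Hypothetical holder for work running on a separate thread; the names in this
// sketch are illustrative and not taken from the examples below.
public class AsyncWork {

    // set by a worker thread when it fails
    private volatile Throwable failureCause;

    /** Surfaces a failure recorded by the worker thread, if any. */
    public void checkForErrors() throws Exception {
        final Throwable t = failureCause;
        if (t != null) {
            // Rethrows t unchanged if it is an Exception or an Error; any other
            // Throwable is wrapped in a new Exception carrying the given message.
            ExceptionUtils.rethrowException(t, "Asynchronous work failed");
        }
    }
}

Most of the examples below use this two-argument variant; the RestClusterClient example uses the single-argument rethrowException(Throwable), which behaves the same but without a custom message for the wrapping exception.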
Example 1
Source File: Handover.java From flink with Apache License 2.0
/**
 * Polls the next element from the Handover, possibly blocking until the next element is
 * available. This method behaves similar to polling from a blocking queue.
 *
 * <p>If an exception was handed in by the producer ({@link #reportError(Throwable)}), then
 * that exception is thrown rather than an element being returned.
 *
 * @return The next element (buffer of records, never null).
 *
 * @throws ClosedException Thrown if the Handover was {@link #close() closed}.
 * @throws Exception Rethrows exceptions from the {@link #reportError(Throwable)} method.
 */
@Nonnull
public ConsumerRecords<byte[], byte[]> pollNext() throws Exception {
    synchronized (lock) {
        while (next == null && error == null) {
            lock.wait();
        }

        ConsumerRecords<byte[], byte[]> n = next;
        if (n != null) {
            next = null;
            lock.notifyAll();
            return n;
        }
        else {
            ExceptionUtils.rethrowException(error, error.getMessage());

            // this statement cannot be reached since the above method always throws an exception
            // this is only here to silence the compiler and any warnings
            return ConsumerRecords.empty();
        }
    }
}
Example 2
Source File: JobLeaderIdService.java From flink with Apache License 2.0
/**
 * Stop and clear the currently registered job leader id listeners.
 *
 * @throws Exception which is thrown in case a retrieval service cannot be stopped properly
 */
public void clear() throws Exception {
    Exception exception = null;

    for (JobLeaderIdListener listener : jobLeaderIdListeners.values()) {
        try {
            listener.stop();
        } catch (Exception e) {
            exception = ExceptionUtils.firstOrSuppressed(e, exception);
        }
    }

    if (exception != null) {
        ExceptionUtils.rethrowException(
            exception,
            "Could not properly stop the " + JobLeaderIdService.class.getSimpleName() + '.');
    }

    jobLeaderIdListeners.clear();
}
Example 3
Source File: StreamContextEnvironment.java From flink with Apache License 2.0
@Override
public JobExecutionResult execute(StreamGraph streamGraph) throws Exception {
    final JobClient jobClient = executeAsync(streamGraph);
    final List<JobListener> jobListeners = getJobListeners();

    try {
        final JobExecutionResult jobExecutionResult = getJobExecutionResult(jobClient);
        jobListeners.forEach(jobListener -> jobListener.onJobExecuted(jobExecutionResult, null));
        return jobExecutionResult;
    } catch (Throwable t) {
        jobListeners.forEach(jobListener ->
            jobListener.onJobExecuted(null, ExceptionUtils.stripExecutionException(t)));
        ExceptionUtils.rethrowException(t);

        // never reached, only make javac happy
        return null;
    }
}
Example 4
Source File: JobManagerSharedServices.java From flink with Apache License 2.0
/**
 * Shutdown the {@link JobMaster} services.
 *
 * <p>This method makes sure all services are closed or shut down, even when an exception occurred
 * in the shutdown of one component. The first encountered exception is thrown, with successive
 * exceptions added as suppressed exceptions.
 *
 * @throws Exception The first Exception encountered during shutdown.
 */
public void shutdown() throws Exception {
    Throwable firstException = null;

    try {
        scheduledExecutorService.shutdownNow();
    } catch (Throwable t) {
        firstException = t;
    }

    libraryCacheManager.shutdown();
    backPressureSampleCoordinator.shutDown();
    backPressureStatsTracker.shutDown();

    if (firstException != null) {
        ExceptionUtils.rethrowException(firstException, "Error while shutting down JobManager services");
    }
}
Example 5
Source File: Handover.java From Flink-CEPplus with Apache License 2.0
/**
 * Polls the next element from the Handover, possibly blocking until the next element is
 * available. This method behaves similar to polling from a blocking queue.
 *
 * <p>If an exception was handed in by the producer ({@link #reportError(Throwable)}), then
 * that exception is thrown rather than an element being returned.
 *
 * @return The next element (buffer of records, never null).
 *
 * @throws ClosedException Thrown if the Handover was {@link #close() closed}.
 * @throws Exception Rethrows exceptions from the {@link #reportError(Throwable)} method.
 */
@Nonnull
public ConsumerRecords<byte[], byte[]> pollNext() throws Exception {
    synchronized (lock) {
        while (next == null && error == null) {
            lock.wait();
        }

        ConsumerRecords<byte[], byte[]> n = next;
        if (n != null) {
            next = null;
            lock.notifyAll();
            return n;
        }
        else {
            ExceptionUtils.rethrowException(error, error.getMessage());

            // this statement cannot be reached since the above method always throws an exception
            // this is only here to silence the compiler and any warnings
            return ConsumerRecords.empty();
        }
    }
}
Example 6
Source File: HadoopFreeFsFactoryTest.java From flink with Apache License 2.0
/**
 * This test validates that the factory can be instantiated and configured even
 * when Hadoop classes are missing from the classpath.
 */
@Test
public void testHadoopFactoryInstantiationWithoutHadoop() throws Exception {
    // we do reflection magic here to instantiate the test in another class
    // loader, to make sure no hadoop classes are in the classpath
    final String testClassName = "org.apache.flink.runtime.fs.hdfs.HadoopFreeTests";

    final URL[] urls = ClassLoaderUtils.getClasspathURLs();

    ClassLoader parent = getClass().getClassLoader();
    ClassLoader hadoopFreeClassLoader = new HadoopFreeClassLoader(urls, parent);
    Class<?> testClass = Class.forName(testClassName, false, hadoopFreeClassLoader);
    Method m = testClass.getDeclaredMethod("test");

    try {
        m.invoke(null);
    } catch (InvocationTargetException e) {
        ExceptionUtils.rethrowException(e.getTargetException(), "exception in method");
    }
}
Example 7
Source File: ZooKeeperHaServices.java From Flink-CEPplus with Apache License 2.0
@Override
public void close() throws Exception {
    Throwable exception = null;

    try {
        blobStoreService.close();
    } catch (Throwable t) {
        exception = t;
    }

    internalClose();

    if (exception != null) {
        ExceptionUtils.rethrowException(exception, "Could not properly close the ZooKeeperHaServices.");
    }
}
Example 8
Source File: JobLeaderIdService.java From Flink-CEPplus with Apache License 2.0
/**
 * Stop and clear the currently registered job leader id listeners.
 *
 * @throws Exception which is thrown in case a retrieval service cannot be stopped properly
 */
public void clear() throws Exception {
    Exception exception = null;

    for (JobLeaderIdListener listener : jobLeaderIdListeners.values()) {
        try {
            listener.stop();
        } catch (Exception e) {
            exception = ExceptionUtils.firstOrSuppressed(e, exception);
        }
    }

    if (exception != null) {
        ExceptionUtils.rethrowException(
            exception,
            "Could not properly stop the " + JobLeaderIdService.class.getSimpleName() + '.');
    }

    jobLeaderIdListeners.clear();
}
Example 9
Source File: JobManagerSharedServices.java From Flink-CEPplus with Apache License 2.0
/**
 * Shutdown the {@link JobMaster} services.
 *
 * <p>This method makes sure all services are closed or shut down, even when an exception occurred
 * in the shutdown of one component. The first encountered exception is thrown, with successive
 * exceptions added as suppressed exceptions.
 *
 * @throws Exception The first Exception encountered during shutdown.
 */
public void shutdown() throws Exception {
    Throwable firstException = null;

    try {
        scheduledExecutorService.shutdownNow();
    } catch (Throwable t) {
        firstException = t;
    }

    libraryCacheManager.shutdown();
    stackTraceSampleCoordinator.shutDown();
    backPressureStatsTracker.shutDown();

    if (firstException != null) {
        ExceptionUtils.rethrowException(firstException, "Error while shutting down JobManager services");
    }
}
Example 10
Source File: HadoopFreeFsFactoryTest.java From Flink-CEPplus with Apache License 2.0
/**
 * This test validates that the factory can be instantiated and configured even
 * when Hadoop classes are missing from the classpath.
 */
@Test
public void testHadoopFactoryInstantiationWithoutHadoop() throws Exception {
    // we do reflection magic here to instantiate the test in another class
    // loader, to make sure no hadoop classes are in the classpath
    final String testClassName = "org.apache.flink.runtime.fs.hdfs.HadoopFreeTests";

    URLClassLoader parent = (URLClassLoader) getClass().getClassLoader();
    ClassLoader hadoopFreeClassLoader = new HadoopFreeClassLoader(parent);
    Class<?> testClass = Class.forName(testClassName, false, hadoopFreeClassLoader);
    Method m = testClass.getDeclaredMethod("test");

    try {
        m.invoke(null);
    } catch (InvocationTargetException e) {
        ExceptionUtils.rethrowException(e.getTargetException(), "exception in method");
    }
}
Example 11
Source File: MapRFsFactoryTest.java From Flink-CEPplus with Apache License 2.0
/**
 * This test validates that the factory can be instantiated and configured even
 * when MapR and Hadoop classes are missing from the classpath.
 */
@Test
public void testInstantiationWithoutMapRClasses() throws Exception {
    // we do reflection magic here to instantiate the test in another class
    // loader, to make sure no MapR and Hadoop classes are in the classpath
    final String testClassName = "org.apache.flink.runtime.fs.maprfs.MapRFreeTests";

    URLClassLoader parent = (URLClassLoader) getClass().getClassLoader();
    ClassLoader maprFreeClassLoader = new MapRFreeClassLoader(parent);
    Class<?> testClass = Class.forName(testClassName, false, maprFreeClassLoader);
    Method m = testClass.getDeclaredMethod("test");

    try {
        m.invoke(null);
    } catch (InvocationTargetException e) {
        ExceptionUtils.rethrowException(e.getTargetException(), "exception in method");
    }
}
Example 12
Source File: RestClusterClient.java From flink with Apache License 2.0
@Override
public Map<String, OptionalFailure<Object>> getAccumulators(final JobID jobID, ClassLoader loader) throws Exception {
    final JobAccumulatorsHeaders accumulatorsHeaders = JobAccumulatorsHeaders.getInstance();
    final JobAccumulatorsMessageParameters accMsgParams = accumulatorsHeaders.getUnresolvedMessageParameters();
    accMsgParams.jobPathParameter.resolve(jobID);
    accMsgParams.includeSerializedAccumulatorsParameter.resolve(Collections.singletonList(true));

    CompletableFuture<JobAccumulatorsInfo> responseFuture = sendRequest(
        accumulatorsHeaders,
        accMsgParams);

    Map<String, OptionalFailure<Object>> result = Collections.emptyMap();

    try {
        result = responseFuture.thenApply((JobAccumulatorsInfo accumulatorsInfo) -> {
            try {
                return AccumulatorHelper.deserializeAccumulators(
                    accumulatorsInfo.getSerializedUserAccumulators(),
                    loader);
            } catch (Exception e) {
                throw new CompletionException(
                    new FlinkException(
                        String.format("Deserialization of accumulators for job %s failed.", jobID),
                        e));
            }
        }).get(timeout.toMillis(), TimeUnit.MILLISECONDS);
    } catch (ExecutionException ee) {
        ExceptionUtils.rethrowException(ExceptionUtils.stripExecutionException(ee));
    }

    return result;
}
Example 13
Source File: ExecutionGraphRestartTest.java From Flink-CEPplus with Apache License 2.0
@Test
public void testRestartWithSlotSharingAndNotEnoughResources() throws Exception {
    // this test is inconclusive if not used with a proper multi-threaded executor
    assertTrue("test assumptions violated", ((ThreadPoolExecutor) executor).getCorePoolSize() > 1);

    final int numRestarts = 10;
    final int parallelism = 20;

    TaskManagerGateway taskManagerGateway = new SimpleAckingTaskManagerGateway();
    final Scheduler scheduler = createSchedulerWithInstances(parallelism - 1, taskManagerGateway);

    final SlotSharingGroup sharingGroup = new SlotSharingGroup();

    final JobVertex source = new JobVertex("source");
    source.setInvokableClass(NoOpInvokable.class);
    source.setParallelism(parallelism);
    source.setSlotSharingGroup(sharingGroup);

    final JobVertex sink = new JobVertex("sink");
    sink.setInvokableClass(NoOpInvokable.class);
    sink.setParallelism(parallelism);
    sink.setSlotSharingGroup(sharingGroup);
    sink.connectNewDataSetAsInput(source, DistributionPattern.POINTWISE, ResultPartitionType.PIPELINED_BOUNDED);

    TestRestartStrategy restartStrategy = new TestRestartStrategy(numRestarts, false);

    final ExecutionGraph eg = ExecutionGraphTestUtils.createExecutionGraph(
        new JobID(), scheduler, restartStrategy, executor, source, sink);

    eg.start(mainThreadExecutor);
    eg.setScheduleMode(ScheduleMode.EAGER);
    eg.scheduleForExecution();

    // wait until no more changes happen
    while (eg.getNumberOfFullRestarts() < numRestarts) {
        Thread.sleep(1);
    }

    assertEquals(JobStatus.FAILED, eg.getState());

    final Throwable t = eg.getFailureCause();
    if (!(t instanceof NoResourceAvailableException)) {
        ExceptionUtils.rethrowException(t, t.getMessage());
    }
}
Example 14
Source File: HandoverTest.java From flink with Apache License 2.0
public void sync() throws Exception {
    join();

    if (error != null) {
        ExceptionUtils.rethrowException(error, error.getMessage());
    }
}
Example 15
Source File: ExecutionGraphNotEnoughResourceTest.java From flink with Apache License 2.0
@Test
public void testRestartWithSlotSharingAndNotEnoughResources() throws Exception {
    final int numRestarts = 10;
    final int parallelism = 20;

    SlotPool slotPool = null;
    try {
        slotPool = new TestingSlotPoolImpl(TEST_JOB_ID);
        final Scheduler scheduler = createSchedulerWithSlots(
            parallelism - 1, slotPool, new LocalTaskManagerLocation());

        final SlotSharingGroup sharingGroup = new SlotSharingGroup();

        final JobVertex source = new JobVertex("source");
        source.setInvokableClass(NoOpInvokable.class);
        source.setParallelism(parallelism);
        source.setSlotSharingGroup(sharingGroup);

        final JobVertex sink = new JobVertex("sink");
        sink.setInvokableClass(NoOpInvokable.class);
        sink.setParallelism(parallelism);
        sink.setSlotSharingGroup(sharingGroup);
        sink.connectNewDataSetAsInput(source, DistributionPattern.POINTWISE, ResultPartitionType.PIPELINED_BOUNDED);

        final JobGraph jobGraph = new JobGraph(TEST_JOB_ID, "Test Job", source, sink);
        jobGraph.setScheduleMode(ScheduleMode.EAGER);

        TestRestartStrategy restartStrategy = new TestRestartStrategy(numRestarts, false);

        final ExecutionGraph eg = TestingExecutionGraphBuilder
            .newBuilder()
            .setJobGraph(jobGraph)
            .setSlotProvider(scheduler)
            .setRestartStrategy(restartStrategy)
            .setAllocationTimeout(Time.milliseconds(1L))
            .build();

        eg.start(mainThreadExecutor);

        mainThreadExecutor.execute(ThrowingRunnable.unchecked(eg::scheduleForExecution));

        CommonTestUtils.waitUntilCondition(
            () -> CompletableFuture.supplyAsync(eg::getState, mainThreadExecutor).join() == JobStatus.FAILED,
            Deadline.fromNow(Duration.ofMillis(2000)));

        // the last suppressed restart is also counted
        assertEquals(numRestarts + 1, CompletableFuture.supplyAsync(eg::getNumberOfRestarts, mainThreadExecutor).join().longValue());

        final Throwable t = CompletableFuture.supplyAsync(eg::getFailureCause, mainThreadExecutor).join();
        if (!(t instanceof NoResourceAvailableException)) {
            ExceptionUtils.rethrowException(t, t.getMessage());
        }
    } finally {
        if (slotPool != null) {
            CompletableFuture.runAsync(slotPool::close, mainThreadExecutor).join();
        }
    }
}
Example 16
Source File: ExecutionGraphRestartTest.java From flink with Apache License 2.0
@Test
public void testRestartWithSlotSharingAndNotEnoughResources() throws Exception {
    // this test is inconclusive if not used with a proper multi-threaded executor
    assertTrue("test assumptions violated", ((ThreadPoolExecutor) executor).getCorePoolSize() > 1);

    final int numRestarts = 10;
    final int parallelism = 20;

    try (SlotPool slotPool = createSlotPoolImpl()) {
        final Scheduler scheduler = createSchedulerWithSlots(
            parallelism - 1, slotPool, new LocalTaskManagerLocation());

        final SlotSharingGroup sharingGroup = new SlotSharingGroup();

        final JobVertex source = new JobVertex("source");
        source.setInvokableClass(NoOpInvokable.class);
        source.setParallelism(parallelism);
        source.setSlotSharingGroup(sharingGroup);

        final JobVertex sink = new JobVertex("sink");
        sink.setInvokableClass(NoOpInvokable.class);
        sink.setParallelism(parallelism);
        sink.setSlotSharingGroup(sharingGroup);
        sink.connectNewDataSetAsInput(source, DistributionPattern.POINTWISE, ResultPartitionType.PIPELINED_BOUNDED);

        TestRestartStrategy restartStrategy = new TestRestartStrategy(numRestarts, false);

        final ExecutionGraph eg = new ExecutionGraphTestUtils.TestingExecutionGraphBuilder(TEST_JOB_ID, source, sink)
            .setSlotProvider(scheduler)
            .setRestartStrategy(restartStrategy)
            .setIoExecutor(executor)
            .setFutureExecutor(executor)
            .setScheduleMode(ScheduleMode.EAGER)
            .build();

        eg.start(mainThreadExecutor);
        eg.scheduleForExecution();

        // wait until no more changes happen
        while (eg.getNumberOfFullRestarts() < numRestarts) {
            Thread.sleep(1);
        }

        assertEquals(JobStatus.FAILED, eg.getState());

        final Throwable t = eg.getFailureCause();
        if (!(t instanceof NoResourceAvailableException)) {
            ExceptionUtils.rethrowException(t, t.getMessage());
        }
    }
}
Example 17
Source File: HandoverTest.java From Flink-CEPplus with Apache License 2.0
public void sync() throws Exception {
    join();

    if (error != null) {
        ExceptionUtils.rethrowException(error, error.getMessage());
    }
}