Java Code Examples for org.apache.flink.api.java.io.TextInputFormat#setFilesFilter()
The following examples show how to use org.apache.flink.api.java.io.TextInputFormat#setFilesFilter(). You can vote up the examples you like or vote down the ones you don't like, and go to the original project or source file by following the links above each example.
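All of the examples below follow the same basic pattern: build a TextInputFormat for a directory, attach a FilePathFilter with setFilesFilter(), and hand the format to readFile(). The sketch below condenses that pattern; the input path, monitoring interval, class name, and job name are placeholders rather than values taken from any of the projects above, and the glob-based alternative appears only as a comment.

import org.apache.flink.api.common.io.FilePathFilter;
import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.FileProcessingMode;

public class SetFilesFilterSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // point the format at the directory to read ("/tmp/input" is a placeholder)
        TextInputFormat format = new TextInputFormat(new Path("/tmp/input"));

        // skip files whose names start with '.' or '_' (the default filter)
        format.setFilesFilter(FilePathFilter.createDefaultFilter());

        // alternatively, a glob-based filter as in the GlobExample below, e.g.:
        // format.setFilesFilter(new GlobFilePathFilter(
        //     Collections.singletonList("**"), Collections.singletonList("**/*.bin")));

        // read the directory continuously, re-scanning it every second
        DataStream<String> lines = env.readFile(
            format, "/tmp/input", FileProcessingMode.PROCESS_CONTINUOUSLY, 1000L);

        lines.print();
        env.execute("setFilesFilter sketch");
    }
}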
Example 1
Source File: ContinuousFileProcessingCheckpointITCase.java From flink with Apache License 2.0 | 6 votes |
@Override
public void testProgram(StreamExecutionEnvironment env) {
    env.enableCheckpointing(10);

    // create and start the file creating thread.
    fc = new FileCreator();
    fc.start();

    // create the monitoring source along with the necessary readers.
    TextInputFormat format = new TextInputFormat(new org.apache.flink.core.fs.Path(localFsURI));
    format.setFilesFilter(FilePathFilter.createDefaultFilter());

    DataStream<String> inputStream = env.readFile(format, localFsURI,
        FileProcessingMode.PROCESS_CONTINUOUSLY, INTERVAL);

    TestingSinkFunction sink = new TestingSinkFunction();

    inputStream.flatMap(new FlatMapFunction<String, String>() {
        @Override
        public void flatMap(String value, Collector<String> out) throws Exception {
            out.collect(value);
        }
    }).addSink(sink).setParallelism(1);
}
Example 2
Source File: GlobExample.java From flink-examples with MIT License | 6 votes |
public static void main(String... args) throws Exception {
    File txtFile = new File("/tmp/test/file.txt");
    File csvFile = new File("/tmp/test/file.csv");
    File binFile = new File("/tmp/test/file.bin");

    writeToFile(txtFile, "txt");
    writeToFile(csvFile, "csv");
    writeToFile(binFile, "bin");

    final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    final TextInputFormat format = new TextInputFormat(new Path("/tmp/test"));

    GlobFilePathFilter filesFilter = new GlobFilePathFilter(
        Collections.singletonList("**"),
        Arrays.asList("**/file.bin")
    );
    System.out.println(Arrays.toString(GlobFilePathFilter.class.getDeclaredFields()));
    format.setFilesFilter(filesFilter);

    DataSet<String> result = env.readFile(format, "/tmp");
    result.writeAsText("/temp/out");

    env.execute("GlobFilePathFilter-Test");
}
Example 3
Source File: ContinuousFileProcessingCheckpointITCase.java From Flink-CEPplus with Apache License 2.0 | 5 votes |
@Override
public void testProgram(StreamExecutionEnvironment env) {

    // set the restart strategy.
    env.getConfig().setRestartStrategy(RestartStrategies.fixedDelayRestart(NO_OF_RETRIES, 0));
    env.enableCheckpointing(10);

    // create and start the file creating thread.
    fc = new FileCreator();
    fc.start();

    // create the monitoring source along with the necessary readers.
    TextInputFormat format = new TextInputFormat(new org.apache.flink.core.fs.Path(localFsURI));
    format.setFilesFilter(FilePathFilter.createDefaultFilter());

    DataStream<String> inputStream = env.readFile(format, localFsURI,
        FileProcessingMode.PROCESS_CONTINUOUSLY, INTERVAL);

    TestingSinkFunction sink = new TestingSinkFunction();

    inputStream.flatMap(new FlatMapFunction<String, String>() {
        @Override
        public void flatMap(String value, Collector<String> out) throws Exception {
            out.collect(value);
        }
    }).addSink(sink).setParallelism(1);
}
Example 4
Source File: ContinuousFileProcessingTest.java From Flink-CEPplus with Apache License 2.0 | 5 votes |
@Test
public void testSortingOnModTime() throws Exception {
    String testBasePath = hdfsURI + "/" + UUID.randomUUID() + "/";

    final long[] modTimes = new long[NO_OF_FILES];
    final org.apache.hadoop.fs.Path[] filesCreated = new org.apache.hadoop.fs.Path[NO_OF_FILES];

    for (int i = 0; i < NO_OF_FILES; i++) {
        Tuple2<org.apache.hadoop.fs.Path, String> file =
            createFileAndFillWithData(testBasePath, "file", i, "This is test line.");
        Thread.sleep(400);

        filesCreated[i] = file.f0;
        modTimes[i] = hdfs.getFileStatus(file.f0).getModificationTime();
    }

    TextInputFormat format = new TextInputFormat(new Path(testBasePath));
    format.setFilesFilter(FilePathFilter.createDefaultFilter());

    // this is just to verify that all splits have been forwarded later.
    FileInputSplit[] splits = format.createInputSplits(1);

    ContinuousFileMonitoringFunction<String> monitoringFunction =
        createTestContinuousFileMonitoringFunction(format, FileProcessingMode.PROCESS_ONCE);

    ModTimeVerifyingSourceContext context = new ModTimeVerifyingSourceContext(modTimes);

    monitoringFunction.open(new Configuration());
    monitoringFunction.run(context);
    Assert.assertEquals(splits.length, context.getCounter());

    // delete the created files.
    for (int i = 0; i < NO_OF_FILES; i++) {
        hdfs.delete(filesCreated[i], false);
    }
}
Example 5
Source File: ContinuousFileProcessingTest.java From flink with Apache License 2.0 | 5 votes |
@Test
public void testSortingOnModTime() throws Exception {
    String testBasePath = hdfsURI + "/" + UUID.randomUUID() + "/";

    final long[] modTimes = new long[NO_OF_FILES];
    final org.apache.hadoop.fs.Path[] filesCreated = new org.apache.hadoop.fs.Path[NO_OF_FILES];

    for (int i = 0; i < NO_OF_FILES; i++) {
        Tuple2<org.apache.hadoop.fs.Path, String> file =
            createFileAndFillWithData(testBasePath, "file", i, "This is test line.");
        Thread.sleep(400);

        filesCreated[i] = file.f0;
        modTimes[i] = hdfs.getFileStatus(file.f0).getModificationTime();
    }

    TextInputFormat format = new TextInputFormat(new Path(testBasePath));
    format.setFilesFilter(FilePathFilter.createDefaultFilter());

    // this is just to verify that all splits have been forwarded later.
    FileInputSplit[] splits = format.createInputSplits(1);

    ContinuousFileMonitoringFunction<String> monitoringFunction =
        createTestContinuousFileMonitoringFunction(format, FileProcessingMode.PROCESS_ONCE);

    ModTimeVerifyingSourceContext context = new ModTimeVerifyingSourceContext(modTimes);

    monitoringFunction.open(new Configuration());
    monitoringFunction.run(context);
    Assert.assertEquals(splits.length, context.getCounter());

    // delete the created files.
    for (int i = 0; i < NO_OF_FILES; i++) {
        hdfs.delete(filesCreated[i], false);
    }
}
Example 6
Source File: ContinuousFileProcessingCheckpointITCase.java From flink with Apache License 2.0 | 5 votes |
@Override
public void testProgram(StreamExecutionEnvironment env) {

    // set the restart strategy.
    env.getConfig().setRestartStrategy(RestartStrategies.fixedDelayRestart(NO_OF_RETRIES, 0));
    env.enableCheckpointing(10);

    // create and start the file creating thread.
    fc = new FileCreator();
    fc.start();

    // create the monitoring source along with the necessary readers.
    TextInputFormat format = new TextInputFormat(new org.apache.flink.core.fs.Path(localFsURI));
    format.setFilesFilter(FilePathFilter.createDefaultFilter());

    DataStream<String> inputStream = env.readFile(format, localFsURI,
        FileProcessingMode.PROCESS_CONTINUOUSLY, INTERVAL);

    TestingSinkFunction sink = new TestingSinkFunction();

    inputStream.flatMap(new FlatMapFunction<String, String>() {
        @Override
        public void flatMap(String value, Collector<String> out) throws Exception {
            out.collect(value);
        }
    }).addSink(sink).setParallelism(1);
}
Example 7
Source File: ContinuousFileProcessingTest.java From flink with Apache License 2.0 | 5 votes |
@Test
public void testSortingOnModTime() throws Exception {
    String testBasePath = hdfsURI + "/" + UUID.randomUUID() + "/";

    final long[] modTimes = new long[NO_OF_FILES];
    final org.apache.hadoop.fs.Path[] filesCreated = new org.apache.hadoop.fs.Path[NO_OF_FILES];

    for (int i = 0; i < NO_OF_FILES; i++) {
        Tuple2<org.apache.hadoop.fs.Path, String> file =
            createFileAndFillWithData(testBasePath, "file", i, "This is test line.");
        Thread.sleep(400);

        filesCreated[i] = file.f0;
        modTimes[i] = hdfs.getFileStatus(file.f0).getModificationTime();
    }

    TextInputFormat format = new TextInputFormat(new Path(testBasePath));
    format.setFilesFilter(FilePathFilter.createDefaultFilter());

    // this is just to verify that all splits have been forwarded later.
    FileInputSplit[] splits = format.createInputSplits(1);

    ContinuousFileMonitoringFunction<String> monitoringFunction =
        createTestContinuousFileMonitoringFunction(format, FileProcessingMode.PROCESS_ONCE);

    ModTimeVerifyingSourceContext context = new ModTimeVerifyingSourceContext(modTimes);

    monitoringFunction.open(new Configuration());
    monitoringFunction.run(context);
    Assert.assertEquals(splits.length, context.getCounter());

    // delete the created files.
    for (int i = 0; i < NO_OF_FILES; i++) {
        hdfs.delete(filesCreated[i], false);
    }
}
Example 8
Source File: SimpleFlinkStreamingCounterDataSource.java From jMetalSP with MIT License | 5 votes |
@Override
public void run() {
    JMetalLogger.logger.info("Run Fink method in the streaming data source invoked");
    JMetalLogger.logger.info("Directory: " + directoryName);

    // environment.getConfig().setRestartStrategy(RestartStrategies.fixedDelayRestart(1,0));
    // environment.enableCheckpointing(10);

    Path filePath = new Path(directoryName);
    TextInputFormat inputFormat = new TextInputFormat(filePath);
    inputFormat.setFilesFilter(FilePathFilter.createDefaultFilter());

    DataStreamSource<String> data = environment.readFile(inputFormat, directoryName,
        FileProcessingMode.PROCESS_CONTINUOUSLY, time);

    try {
        Iterator<String> it = DataStreamUtils.collect(data);
        while (it.hasNext()) {
            Integer number = Integer.parseInt(it.next());
            observable.setChanged();
            observable.notifyObservers(new ObservedValue<Integer>(number));
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Example 9
Source File: ContinuousFileProcessingITCase.java From Flink-CEPplus with Apache License 2.0 | 4 votes |
@Test
public void testProgram() throws Exception {

    /*
     * This test checks the interplay between the monitor and the reader
     * and also the failExternally() functionality. To test the latter we
     * set the parallelism to 1 so that we have the chaining between the sink,
     * which throws the SuccessException to signal the end of the test, and the
     * reader.
     * */

    TextInputFormat format = new TextInputFormat(new Path(hdfsURI));
    format.setFilePath(hdfsURI);
    format.setFilesFilter(FilePathFilter.createDefaultFilter());

    // create the stream execution environment with a parallelism > 1 to test
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(PARALLELISM);

    ContinuousFileMonitoringFunction<String> monitoringFunction =
        new ContinuousFileMonitoringFunction<>(format,
            FileProcessingMode.PROCESS_CONTINUOUSLY,
            env.getParallelism(), INTERVAL);

    // the monitor has always DOP 1
    DataStream<TimestampedFileInputSplit> splits = env.addSource(monitoringFunction);
    Assert.assertEquals(1, splits.getParallelism());

    ContinuousFileReaderOperator<String> reader = new ContinuousFileReaderOperator<>(format);
    TypeInformation<String> typeInfo = TypeExtractor.getInputFormatTypes(format);

    // the readers can be multiple
    DataStream<String> content = splits.transform("FileSplitReader", typeInfo, reader);
    Assert.assertEquals(PARALLELISM, content.getParallelism());

    // finally for the sink we set the parallelism to 1 so that we can verify the output
    TestingSinkFunction sink = new TestingSinkFunction();
    content.addSink(sink).setParallelism(1);

    Thread job = new Thread() {
        @Override
        public void run() {
            try {
                env.execute("ContinuousFileProcessingITCase Job.");
            } catch (Exception e) {
                Throwable th = e;
                for (int depth = 0; depth < 20; depth++) {
                    if (th instanceof SuccessException) {
                        return;
                    } else if (th.getCause() != null) {
                        th = th.getCause();
                    } else {
                        break;
                    }
                }
                e.printStackTrace();
                Assert.fail(e.getMessage());
            }
        }
    };
    job.start();

    // The modification time of the last created file.
    long lastCreatedModTime = Long.MIN_VALUE;

    // create the files to be read
    for (int i = 0; i < NO_OF_FILES; i++) {
        Tuple2<org.apache.hadoop.fs.Path, String> tmpFile;
        long modTime;
        do {
            // give it some time so that the files have
            // different modification timestamps.
            Thread.sleep(50);

            tmpFile = fillWithData(hdfsURI, "file", i, "This is test line.");

            modTime = hdfs.getFileStatus(tmpFile.f0).getModificationTime();
            if (modTime <= lastCreatedModTime) {
                // delete the last created file to recreate it with a different timestamp
                hdfs.delete(tmpFile.f0, false);
            }
        } while (modTime <= lastCreatedModTime);
        lastCreatedModTime = modTime;

        // put the contents in the expected results list before the reader picks them
        // this is to guarantee that they are in before the reader finishes (avoid race conditions)
        expectedContents.put(i, tmpFile.f1);

        org.apache.hadoop.fs.Path file = new org.apache.hadoop.fs.Path(hdfsURI + "/file" + i);
        hdfs.rename(tmpFile.f0, file);
        Assert.assertTrue(hdfs.exists(file));
    }

    // wait for the job to finish.
    job.join();
}
Example 10
Source File: ContinuousFileProcessingTest.java From flink with Apache License 2.0 | 4 votes |
@Test
public void testProcessContinuously() throws Exception {
    String testBasePath = hdfsURI + "/" + UUID.randomUUID() + "/";

    final OneShotLatch latch = new OneShotLatch();

    // create a single file in the directory
    Tuple2<org.apache.hadoop.fs.Path, String> bootstrap =
        createFileAndFillWithData(testBasePath, "file", NO_OF_FILES + 1, "This is test line.");
    Assert.assertTrue(hdfs.exists(bootstrap.f0));

    final Set<String> filesToBeRead = new TreeSet<>();
    filesToBeRead.add(bootstrap.f0.getName());

    TextInputFormat format = new TextInputFormat(new Path(testBasePath));
    format.setFilesFilter(FilePathFilter.createDefaultFilter());

    final ContinuousFileMonitoringFunction<String> monitoringFunction =
        createTestContinuousFileMonitoringFunction(format, FileProcessingMode.PROCESS_CONTINUOUSLY);

    final int totalNoOfFilesToBeRead = NO_OF_FILES + 1; // 1 for the bootstrap + NO_OF_FILES
    final FileVerifyingSourceContext context = new FileVerifyingSourceContext(
        latch, monitoringFunction, 1, totalNoOfFilesToBeRead);

    final Thread t = new Thread() {
        @Override
        public void run() {
            try {
                monitoringFunction.open(new Configuration());
                monitoringFunction.run(context);
            } catch (Exception e) {
                Assert.fail(e.getMessage());
            }
        }
    };
    t.start();

    if (!latch.isTriggered()) {
        latch.await();
    }

    // create some additional files that will be processed in the case of PROCESS_CONTINUOUSLY
    final org.apache.hadoop.fs.Path[] filesCreated = new org.apache.hadoop.fs.Path[NO_OF_FILES];
    for (int i = 0; i < NO_OF_FILES; i++) {
        Tuple2<org.apache.hadoop.fs.Path, String> file =
            createFileAndFillWithData(testBasePath, "file", i, "This is test line.");
        filesCreated[i] = file.f0;
        filesToBeRead.add(file.f0.getName());
    }

    // wait until the monitoring thread exits
    t.join();

    Assert.assertArrayEquals(filesToBeRead.toArray(), context.getSeenFiles().toArray());

    // finally delete the files created for the test.
    hdfs.delete(bootstrap.f0, false);
    for (org.apache.hadoop.fs.Path path : filesCreated) {
        hdfs.delete(path, false);
    }
}
Example 11
Source File: ContinuousFileProcessingTest.java From flink with Apache License 2.0 | 4 votes |
@Test
public void testProcessOnce() throws Exception {
    String testBasePath = hdfsURI + "/" + UUID.randomUUID() + "/";

    final OneShotLatch latch = new OneShotLatch();

    // create a single file in the directory
    Tuple2<org.apache.hadoop.fs.Path, String> bootstrap =
        createFileAndFillWithData(testBasePath, "file", NO_OF_FILES + 1, "This is test line.");
    Assert.assertTrue(hdfs.exists(bootstrap.f0));

    // the source is supposed to read only this file.
    final Set<String> filesToBeRead = new TreeSet<>();
    filesToBeRead.add(bootstrap.f0.getName());

    TextInputFormat format = new TextInputFormat(new Path(testBasePath));
    format.setFilesFilter(FilePathFilter.createDefaultFilter());

    final ContinuousFileMonitoringFunction<String> monitoringFunction =
        createTestContinuousFileMonitoringFunction(format, FileProcessingMode.PROCESS_ONCE);

    final FileVerifyingSourceContext context = new FileVerifyingSourceContext(latch, monitoringFunction);

    final Thread t = new Thread() {
        @Override
        public void run() {
            try {
                monitoringFunction.open(new Configuration());
                monitoringFunction.run(context);

                // we would never arrive here if we were in
                // PROCESS_CONTINUOUSLY mode.

                // this will trigger the latch
                context.close();

            } catch (Exception e) {
                Assert.fail(e.getMessage());
            }
        }
    };
    t.start();

    if (!latch.isTriggered()) {
        latch.await();
    }

    // create some additional files that should be processed in the case of PROCESS_CONTINUOUSLY
    final org.apache.hadoop.fs.Path[] filesCreated = new org.apache.hadoop.fs.Path[NO_OF_FILES];
    for (int i = 0; i < NO_OF_FILES; i++) {
        Tuple2<org.apache.hadoop.fs.Path, String> ignoredFile =
            createFileAndFillWithData(testBasePath, "file", i, "This is test line.");
        filesCreated[i] = ignoredFile.f0;
    }

    // wait until the monitoring thread exits
    t.join();

    Assert.assertArrayEquals(filesToBeRead.toArray(), context.getSeenFiles().toArray());

    // finally delete the files created for the test.
    hdfs.delete(bootstrap.f0, false);
    for (org.apache.hadoop.fs.Path path : filesCreated) {
        hdfs.delete(path, false);
    }
}
Example 12
Source File: ContinuousFileProcessingITCase.java From flink with Apache License 2.0 | 4 votes |
@Test
public void testProgram() throws Exception {

    /*
     * This test checks the interplay between the monitor and the reader
     * and also the failExternally() functionality. To test the latter we
     * set the parallelism to 1 so that we have the chaining between the sink,
     * which throws the SuccessException to signal the end of the test, and the
     * reader.
     * */

    TextInputFormat format = new TextInputFormat(new Path(hdfsURI));
    format.setFilePath(hdfsURI);
    format.setFilesFilter(FilePathFilter.createDefaultFilter());

    // create the stream execution environment with a parallelism > 1 to test
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(PARALLELISM);

    ContinuousFileMonitoringFunction<String> monitoringFunction =
        new ContinuousFileMonitoringFunction<>(format,
            FileProcessingMode.PROCESS_CONTINUOUSLY,
            env.getParallelism(), INTERVAL);

    // the monitor has always DOP 1
    DataStream<TimestampedFileInputSplit> splits = env.addSource(monitoringFunction);
    Assert.assertEquals(1, splits.getParallelism());

    TypeInformation<String> typeInfo = TypeExtractor.getInputFormatTypes(format);

    // the readers can be multiple
    DataStream<String> content = splits.transform("FileSplitReader", typeInfo,
        new ContinuousFileReaderOperatorFactory<>(format));
    Assert.assertEquals(PARALLELISM, content.getParallelism());

    // finally for the sink we set the parallelism to 1 so that we can verify the output
    TestingSinkFunction sink = new TestingSinkFunction();
    content.addSink(sink).setParallelism(1);

    CompletableFuture<Void> jobFuture = new CompletableFuture<>();
    new Thread(() -> {
        try {
            env.execute("ContinuousFileProcessingITCase Job.");
            jobFuture.complete(null);
        } catch (Exception e) {
            if (ExceptionUtils.findThrowable(e, SuccessException.class).isPresent()) {
                jobFuture.complete(null);
            } else {
                jobFuture.completeExceptionally(e);
            }
        }
    }).start();

    // The modification time of the last created file.
    long lastCreatedModTime = Long.MIN_VALUE;

    // create the files to be read
    for (int i = 0; i < NO_OF_FILES; i++) {
        Tuple2<org.apache.hadoop.fs.Path, String> tmpFile;
        long modTime;
        do {
            // give it some time so that the files have
            // different modification timestamps.
            Thread.sleep(50);

            tmpFile = fillWithData(hdfsURI, "file", i, "This is test line.");

            modTime = hdfs.getFileStatus(tmpFile.f0).getModificationTime();
            if (modTime <= lastCreatedModTime) {
                // delete the last created file to recreate it with a different timestamp
                hdfs.delete(tmpFile.f0, false);
            }
        } while (modTime <= lastCreatedModTime);
        lastCreatedModTime = modTime;

        // put the contents in the expected results list before the reader picks them
        // this is to guarantee that they are in before the reader finishes (avoid race conditions)
        expectedContents.put(i, tmpFile.f1);

        org.apache.hadoop.fs.Path file = new org.apache.hadoop.fs.Path(hdfsURI + "/file" + i);
        hdfs.rename(tmpFile.f0, file);
        Assert.assertTrue(hdfs.exists(file));
    }

    jobFuture.get();
}
Example 13
Source File: ContinuousFileProcessingTest.java From flink with Apache License 2.0 | 4 votes |
@Test
public void testProcessContinuously() throws Exception {
    String testBasePath = hdfsURI + "/" + UUID.randomUUID() + "/";

    final OneShotLatch latch = new OneShotLatch();

    // create a single file in the directory
    Tuple2<org.apache.hadoop.fs.Path, String> bootstrap =
        createFileAndFillWithData(testBasePath, "file", NO_OF_FILES + 1, "This is test line.");
    Assert.assertTrue(hdfs.exists(bootstrap.f0));

    final Set<String> filesToBeRead = new TreeSet<>();
    filesToBeRead.add(bootstrap.f0.getName());

    TextInputFormat format = new TextInputFormat(new Path(testBasePath));
    format.setFilesFilter(FilePathFilter.createDefaultFilter());

    final ContinuousFileMonitoringFunction<String> monitoringFunction =
        createTestContinuousFileMonitoringFunction(format, FileProcessingMode.PROCESS_CONTINUOUSLY);

    final int totalNoOfFilesToBeRead = NO_OF_FILES + 1; // 1 for the bootstrap + NO_OF_FILES
    final FileVerifyingSourceContext context = new FileVerifyingSourceContext(
        latch, monitoringFunction, 1, totalNoOfFilesToBeRead);

    final Thread t = new Thread() {
        @Override
        public void run() {
            try {
                monitoringFunction.open(new Configuration());
                monitoringFunction.run(context);
            } catch (Exception e) {
                Assert.fail(e.getMessage());
            }
        }
    };
    t.start();

    if (!latch.isTriggered()) {
        latch.await();
    }

    // create some additional files that will be processed in the case of PROCESS_CONTINUOUSLY
    final org.apache.hadoop.fs.Path[] filesCreated = new org.apache.hadoop.fs.Path[NO_OF_FILES];
    for (int i = 0; i < NO_OF_FILES; i++) {
        Tuple2<org.apache.hadoop.fs.Path, String> file =
            createFileAndFillWithData(testBasePath, "file", i, "This is test line.");
        filesCreated[i] = file.f0;
        filesToBeRead.add(file.f0.getName());
    }

    // wait until the monitoring thread exits
    t.join();

    Assert.assertArrayEquals(filesToBeRead.toArray(), context.getSeenFiles().toArray());

    // finally delete the files created for the test.
    hdfs.delete(bootstrap.f0, false);
    for (org.apache.hadoop.fs.Path path : filesCreated) {
        hdfs.delete(path, false);
    }
}
Example 14
Source File: ContinuousFileProcessingTest.java From flink with Apache License 2.0 | 4 votes |
@Test
public void testProcessOnce() throws Exception {
    String testBasePath = hdfsURI + "/" + UUID.randomUUID() + "/";

    final OneShotLatch latch = new OneShotLatch();

    // create a single file in the directory
    Tuple2<org.apache.hadoop.fs.Path, String> bootstrap =
        createFileAndFillWithData(testBasePath, "file", NO_OF_FILES + 1, "This is test line.");
    Assert.assertTrue(hdfs.exists(bootstrap.f0));

    // the source is supposed to read only this file.
    final Set<String> filesToBeRead = new TreeSet<>();
    filesToBeRead.add(bootstrap.f0.getName());

    TextInputFormat format = new TextInputFormat(new Path(testBasePath));
    format.setFilesFilter(FilePathFilter.createDefaultFilter());

    final ContinuousFileMonitoringFunction<String> monitoringFunction =
        createTestContinuousFileMonitoringFunction(format, FileProcessingMode.PROCESS_ONCE);

    final FileVerifyingSourceContext context = new FileVerifyingSourceContext(latch, monitoringFunction);

    final Thread t = new Thread() {
        @Override
        public void run() {
            try {
                monitoringFunction.open(new Configuration());
                monitoringFunction.run(context);

                // we would never arrive here if we were in
                // PROCESS_CONTINUOUSLY mode.

                // this will trigger the latch
                context.close();

            } catch (Exception e) {
                Assert.fail(e.getMessage());
            }
        }
    };
    t.start();

    if (!latch.isTriggered()) {
        latch.await();
    }

    // create some additional files that should be processed in the case of PROCESS_CONTINUOUSLY
    final org.apache.hadoop.fs.Path[] filesCreated = new org.apache.hadoop.fs.Path[NO_OF_FILES];
    for (int i = 0; i < NO_OF_FILES; i++) {
        Tuple2<org.apache.hadoop.fs.Path, String> ignoredFile =
            createFileAndFillWithData(testBasePath, "file", i, "This is test line.");
        filesCreated[i] = ignoredFile.f0;
    }

    // wait until the monitoring thread exits
    t.join();

    Assert.assertArrayEquals(filesToBeRead.toArray(), context.getSeenFiles().toArray());

    // finally delete the files created for the test.
    hdfs.delete(bootstrap.f0, false);
    for (org.apache.hadoop.fs.Path path : filesCreated) {
        hdfs.delete(path, false);
    }
}
Example 15
Source File: ContinuousFileProcessingITCase.java From flink with Apache License 2.0 | 4 votes |
@Test
public void testProgram() throws Exception {

    /*
     * This test checks the interplay between the monitor and the reader
     * and also the failExternally() functionality. To test the latter we
     * set the parallelism to 1 so that we have the chaining between the sink,
     * which throws the SuccessException to signal the end of the test, and the
     * reader.
     * */

    TextInputFormat format = new TextInputFormat(new Path(hdfsURI));
    format.setFilePath(hdfsURI);
    format.setFilesFilter(FilePathFilter.createDefaultFilter());

    // create the stream execution environment with a parallelism > 1 to test
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(PARALLELISM);

    ContinuousFileMonitoringFunction<String> monitoringFunction =
        new ContinuousFileMonitoringFunction<>(format,
            FileProcessingMode.PROCESS_CONTINUOUSLY,
            env.getParallelism(), INTERVAL);

    // the monitor has always DOP 1
    DataStream<TimestampedFileInputSplit> splits = env.addSource(monitoringFunction);
    Assert.assertEquals(1, splits.getParallelism());

    ContinuousFileReaderOperator<String> reader = new ContinuousFileReaderOperator<>(format);
    TypeInformation<String> typeInfo = TypeExtractor.getInputFormatTypes(format);

    // the readers can be multiple
    DataStream<String> content = splits.transform("FileSplitReader", typeInfo, reader);
    Assert.assertEquals(PARALLELISM, content.getParallelism());

    // finally for the sink we set the parallelism to 1 so that we can verify the output
    TestingSinkFunction sink = new TestingSinkFunction();
    content.addSink(sink).setParallelism(1);

    Thread job = new Thread() {
        @Override
        public void run() {
            try {
                env.execute("ContinuousFileProcessingITCase Job.");
            } catch (Exception e) {
                Throwable th = e;
                for (int depth = 0; depth < 20; depth++) {
                    if (th instanceof SuccessException) {
                        return;
                    } else if (th.getCause() != null) {
                        th = th.getCause();
                    } else {
                        break;
                    }
                }
                e.printStackTrace();
                Assert.fail(e.getMessage());
            }
        }
    };
    job.start();

    // The modification time of the last created file.
    long lastCreatedModTime = Long.MIN_VALUE;

    // create the files to be read
    for (int i = 0; i < NO_OF_FILES; i++) {
        Tuple2<org.apache.hadoop.fs.Path, String> tmpFile;
        long modTime;
        do {
            // give it some time so that the files have
            // different modification timestamps.
            Thread.sleep(50);

            tmpFile = fillWithData(hdfsURI, "file", i, "This is test line.");

            modTime = hdfs.getFileStatus(tmpFile.f0).getModificationTime();
            if (modTime <= lastCreatedModTime) {
                // delete the last created file to recreate it with a different timestamp
                hdfs.delete(tmpFile.f0, false);
            }
        } while (modTime <= lastCreatedModTime);
        lastCreatedModTime = modTime;

        // put the contents in the expected results list before the reader picks them
        // this is to guarantee that they are in before the reader finishes (avoid race conditions)
        expectedContents.put(i, tmpFile.f1);

        org.apache.hadoop.fs.Path file = new org.apache.hadoop.fs.Path(hdfsURI + "/file" + i);
        hdfs.rename(tmpFile.f0, file);
        Assert.assertTrue(hdfs.exists(file));
    }

    // wait for the job to finish.
    job.join();
}
Example 16
Source File: ContinuousFileProcessingTest.java From Flink-CEPplus with Apache License 2.0 | 4 votes |
@Test
public void testProcessContinuously() throws Exception {
    String testBasePath = hdfsURI + "/" + UUID.randomUUID() + "/";

    final OneShotLatch latch = new OneShotLatch();

    // create a single file in the directory
    Tuple2<org.apache.hadoop.fs.Path, String> bootstrap =
        createFileAndFillWithData(testBasePath, "file", NO_OF_FILES + 1, "This is test line.");
    Assert.assertTrue(hdfs.exists(bootstrap.f0));

    final Set<String> filesToBeRead = new TreeSet<>();
    filesToBeRead.add(bootstrap.f0.getName());

    TextInputFormat format = new TextInputFormat(new Path(testBasePath));
    format.setFilesFilter(FilePathFilter.createDefaultFilter());

    final ContinuousFileMonitoringFunction<String> monitoringFunction =
        createTestContinuousFileMonitoringFunction(format, FileProcessingMode.PROCESS_CONTINUOUSLY);

    final int totalNoOfFilesToBeRead = NO_OF_FILES + 1; // 1 for the bootstrap + NO_OF_FILES
    final FileVerifyingSourceContext context = new FileVerifyingSourceContext(
        latch, monitoringFunction, 1, totalNoOfFilesToBeRead);

    final Thread t = new Thread() {
        @Override
        public void run() {
            try {
                monitoringFunction.open(new Configuration());
                monitoringFunction.run(context);
            } catch (Exception e) {
                Assert.fail(e.getMessage());
            }
        }
    };
    t.start();

    if (!latch.isTriggered()) {
        latch.await();
    }

    // create some additional files that will be processed in the case of PROCESS_CONTINUOUSLY
    final org.apache.hadoop.fs.Path[] filesCreated = new org.apache.hadoop.fs.Path[NO_OF_FILES];
    for (int i = 0; i < NO_OF_FILES; i++) {
        Tuple2<org.apache.hadoop.fs.Path, String> file =
            createFileAndFillWithData(testBasePath, "file", i, "This is test line.");
        filesCreated[i] = file.f0;
        filesToBeRead.add(file.f0.getName());
    }

    // wait until the monitoring thread exits
    t.join();

    Assert.assertArrayEquals(filesToBeRead.toArray(), context.getSeenFiles().toArray());

    // finally delete the files created for the test.
    hdfs.delete(bootstrap.f0, false);
    for (org.apache.hadoop.fs.Path path : filesCreated) {
        hdfs.delete(path, false);
    }
}
Example 17
Source File: ContinuousFileProcessingTest.java From Flink-CEPplus with Apache License 2.0 | 4 votes |
@Test
public void testProcessOnce() throws Exception {
    String testBasePath = hdfsURI + "/" + UUID.randomUUID() + "/";

    final OneShotLatch latch = new OneShotLatch();

    // create a single file in the directory
    Tuple2<org.apache.hadoop.fs.Path, String> bootstrap =
        createFileAndFillWithData(testBasePath, "file", NO_OF_FILES + 1, "This is test line.");
    Assert.assertTrue(hdfs.exists(bootstrap.f0));

    // the source is supposed to read only this file.
    final Set<String> filesToBeRead = new TreeSet<>();
    filesToBeRead.add(bootstrap.f0.getName());

    TextInputFormat format = new TextInputFormat(new Path(testBasePath));
    format.setFilesFilter(FilePathFilter.createDefaultFilter());

    final ContinuousFileMonitoringFunction<String> monitoringFunction =
        createTestContinuousFileMonitoringFunction(format, FileProcessingMode.PROCESS_ONCE);

    final FileVerifyingSourceContext context = new FileVerifyingSourceContext(latch, monitoringFunction);

    final Thread t = new Thread() {
        @Override
        public void run() {
            try {
                monitoringFunction.open(new Configuration());
                monitoringFunction.run(context);

                // we would never arrive here if we were in
                // PROCESS_CONTINUOUSLY mode.

                // this will trigger the latch
                context.close();

            } catch (Exception e) {
                Assert.fail(e.getMessage());
            }
        }
    };
    t.start();

    if (!latch.isTriggered()) {
        latch.await();
    }

    // create some additional files that should be processed in the case of PROCESS_CONTINUOUSLY
    final org.apache.hadoop.fs.Path[] filesCreated = new org.apache.hadoop.fs.Path[NO_OF_FILES];
    for (int i = 0; i < NO_OF_FILES; i++) {
        Tuple2<org.apache.hadoop.fs.Path, String> ignoredFile =
            createFileAndFillWithData(testBasePath, "file", i, "This is test line.");
        filesCreated[i] = ignoredFile.f0;
    }

    // wait until the monitoring thread exits
    t.join();

    Assert.assertArrayEquals(filesToBeRead.toArray(), context.getSeenFiles().toArray());

    // finally delete the files created for the test.
    hdfs.delete(bootstrap.f0, false);
    for (org.apache.hadoop.fs.Path path : filesCreated) {
        hdfs.delete(path, false);
    }
}
Example 18
Source File: StreamExecutionEnvironment.java From flink with Apache License 2.0 | 3 votes |
/**
 * Reads the given file line-by-line and creates a data stream that contains a string with the
 * contents of each such line. The {@link java.nio.charset.Charset} with the given name will be
 * used to read the files.
 *
 * <p><b>NOTES ON CHECKPOINTING: </b> The source monitors the path, creates the
 * {@link org.apache.flink.core.fs.FileInputSplit FileInputSplits} to be processed,
 * forwards them to the downstream {@link ContinuousFileReaderOperator readers} to read the actual data,
 * and exits, without waiting for the readers to finish reading. This implies that no more checkpoint
 * barriers are going to be forwarded after the source exits, thus having no checkpoints after that point.
 *
 * @param filePath
 *     The path of the file, as a URI (e.g., "file:///some/local/file" or "hdfs://host:port/file/path")
 * @param charsetName
 *     The name of the character set used to read the file
 * @return The data stream that represents the data read from the given file as text lines
 */
public DataStreamSource<String> readTextFile(String filePath, String charsetName) {
    Preconditions.checkArgument(!StringUtils.isNullOrWhitespaceOnly(filePath),
        "The file path must not be null or blank.");

    TextInputFormat format = new TextInputFormat(new Path(filePath));
    format.setFilesFilter(FilePathFilter.createDefaultFilter());
    TypeInformation<String> typeInfo = BasicTypeInfo.STRING_TYPE_INFO;
    format.setCharsetName(charsetName);

    return readFile(format, filePath, FileProcessingMode.PROCESS_ONCE, -1, typeInfo);
}
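As a usage note, a minimal driver calling the method shown above might look like the sketch below; the class name, file path, and charset are placeholders rather than values from the Flink sources.

import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ReadTextFileCharsetSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // readTextFile(path, charset) builds a TextInputFormat internally, installs the default
        // filter via setFilesFilter(FilePathFilter.createDefaultFilter()), and reads once (PROCESS_ONCE).
        DataStreamSource<String> lines = env.readTextFile("file:///tmp/input/data.txt", "ISO-8859-1");

        lines.print();
        env.execute("readTextFile with explicit charset");
    }
}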
Example 19
Source File: StreamExecutionEnvironment.java From Flink-CEPplus with Apache License 2.0 | 3 votes |
/**
 * Reads the given file line-by-line and creates a data stream that contains a string with the
 * contents of each such line. The {@link java.nio.charset.Charset} with the given name will be
 * used to read the files.
 *
 * <p><b>NOTES ON CHECKPOINTING: </b> The source monitors the path, creates the
 * {@link org.apache.flink.core.fs.FileInputSplit FileInputSplits} to be processed,
 * forwards them to the downstream {@link ContinuousFileReaderOperator readers} to read the actual data,
 * and exits, without waiting for the readers to finish reading. This implies that no more checkpoint
 * barriers are going to be forwarded after the source exits, thus having no checkpoints after that point.
 *
 * @param filePath
 *     The path of the file, as a URI (e.g., "file:///some/local/file" or "hdfs://host:port/file/path")
 * @param charsetName
 *     The name of the character set used to read the file
 * @return The data stream that represents the data read from the given file as text lines
 */
public DataStreamSource<String> readTextFile(String filePath, String charsetName) {
    Preconditions.checkArgument(!StringUtils.isNullOrWhitespaceOnly(filePath),
        "The file path must not be null or blank.");

    TextInputFormat format = new TextInputFormat(new Path(filePath));
    format.setFilesFilter(FilePathFilter.createDefaultFilter());
    TypeInformation<String> typeInfo = BasicTypeInfo.STRING_TYPE_INFO;
    format.setCharsetName(charsetName);

    return readFile(format, filePath, FileProcessingMode.PROCESS_ONCE, -1, typeInfo);
}
Example 20
Source File: StreamExecutionEnvironment.java From flink with Apache License 2.0 | 3 votes |
/**
 * Reads the given file line-by-line and creates a data stream that contains a string with the
 * contents of each such line. The {@link java.nio.charset.Charset} with the given name will be
 * used to read the files.
 *
 * <p><b>NOTES ON CHECKPOINTING: </b> The source monitors the path, creates the
 * {@link org.apache.flink.core.fs.FileInputSplit FileInputSplits} to be processed,
 * forwards them to the downstream readers to read the actual data,
 * and exits, without waiting for the readers to finish reading. This implies that no more checkpoint
 * barriers are going to be forwarded after the source exits, thus having no checkpoints after that point.
 *
 * @param filePath
 *     The path of the file, as a URI (e.g., "file:///some/local/file" or "hdfs://host:port/file/path")
 * @param charsetName
 *     The name of the character set used to read the file
 * @return The data stream that represents the data read from the given file as text lines
 */
public DataStreamSource<String> readTextFile(String filePath, String charsetName) {
    Preconditions.checkArgument(!StringUtils.isNullOrWhitespaceOnly(filePath),
        "The file path must not be null or blank.");

    TextInputFormat format = new TextInputFormat(new Path(filePath));
    format.setFilesFilter(FilePathFilter.createDefaultFilter());
    TypeInformation<String> typeInfo = BasicTypeInfo.STRING_TYPE_INFO;
    format.setCharsetName(charsetName);

    return readFile(format, filePath, FileProcessingMode.PROCESS_ONCE, -1, typeInfo);
}