Java Code Examples for org.apache.flink.api.java.io.TextOutputFormat#setWriteMode()
The following examples show how to use org.apache.flink.api.java.io.TextOutputFormat#setWriteMode().
The original project and source file are noted above each example.
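Before the individual examples, here is a minimal, self-contained sketch of the typical pattern: construct a TextOutputFormat, pick a WriteMode, and hand the format to a sink. The output path, the sample data, and the parallelism of 1 are assumptions made purely for illustration.

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.TextOutputFormat;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;

public class WriteModeSketch {
	public static void main(String[] args) throws Exception {
		ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
		DataSet<String> lines = env.fromElements("alpha", "beta", "gamma");

		// Hypothetical output location, chosen only for this sketch.
		TextOutputFormat<String> format = new TextOutputFormat<>(new Path("file:///tmp/write-mode-sketch"));
		// OVERWRITE replaces existing output; NO_OVERWRITE makes the job fail if the target already exists.
		format.setWriteMode(FileSystem.WriteMode.OVERWRITE);

		lines.output(format).setParallelism(1);
		env.execute("TextOutputFormat setWriteMode sketch");
	}
}

OVERWRITE replaces any existing output at the path, while NO_OVERWRITE lets the job fail instead of silently clobbering earlier results.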
Example 1
Source File: HDFSTest.java From Flink-CEPplus with Apache License 2.0
@Test
public void testChangingFileNames() {
	org.apache.hadoop.fs.Path hdfsPath = new org.apache.hadoop.fs.Path(hdfsURI + "/hdfsTest");
	Path path = new Path(hdfsPath.toString());
	String type = "one";
	TextOutputFormat<String> outputFormat = new TextOutputFormat<>(path);

	outputFormat.setWriteMode(FileSystem.WriteMode.NO_OVERWRITE);
	outputFormat.setOutputDirectoryMode(FileOutputFormat.OutputDirectoryMode.ALWAYS);

	try {
		outputFormat.open(0, 2);
		outputFormat.writeRecord(type);
		outputFormat.close();

		outputFormat.open(1, 2);
		outputFormat.writeRecord(type);
		outputFormat.close();

		assertTrue("No result file present", hdfs.exists(hdfsPath));
		FileStatus[] files = hdfs.listStatus(hdfsPath);
		Assert.assertEquals(2, files.length);
		for (FileStatus file : files) {
			assertTrue("1".equals(file.getPath().getName()) || "2".equals(file.getPath().getName()));
		}
	} catch (IOException e) {
		e.printStackTrace();
		Assert.fail(e.getMessage());
	}
}
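The test above drives the format by hand: open(taskNumber, numTasks) decides the actual file name, so with two tasks the records end up in files named "1" and "2" inside the output directory, which is exactly what the assertions check. A rough local sketch of the same manual pattern follows; the file:///tmp path is an assumption, and OVERWRITE is used (instead of the test's NO_OVERWRITE) so reruns do not fail on existing output.

import java.io.IOException;

import org.apache.flink.api.common.io.FileOutputFormat;
import org.apache.flink.api.java.io.TextOutputFormat;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;

public class ManualTextOutputFormatSketch {
	public static void main(String[] args) throws IOException {
		// Hypothetical local output directory, used only for this sketch.
		TextOutputFormat<String> format = new TextOutputFormat<>(new Path("file:///tmp/manual-text-output"));
		// OVERWRITE so output left over from a previous run is replaced.
		format.setWriteMode(FileSystem.WriteMode.OVERWRITE);
		// ALWAYS forces a directory with one file per task, even for a single task.
		format.setOutputDirectoryMode(FileOutputFormat.OutputDirectoryMode.ALWAYS);

		// Act as task 0 of 2, then task 1 of 2; files named "1" and "2" are created.
		for (int task = 0; task < 2; task++) {
			format.open(task, 2);
			format.writeRecord("record from task " + task);
			format.close();
		}
	}
}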
Example 2
Source File: HDFSTest.java From flink with Apache License 2.0
@Test
public void testChangingFileNames() {
	org.apache.hadoop.fs.Path hdfsPath = new org.apache.hadoop.fs.Path(hdfsURI + "/hdfsTest");
	Path path = new Path(hdfsPath.toString());
	String type = "one";
	TextOutputFormat<String> outputFormat = new TextOutputFormat<>(path);

	outputFormat.setWriteMode(FileSystem.WriteMode.NO_OVERWRITE);
	outputFormat.setOutputDirectoryMode(FileOutputFormat.OutputDirectoryMode.ALWAYS);

	try {
		outputFormat.open(0, 2);
		outputFormat.writeRecord(type);
		outputFormat.close();

		outputFormat.open(1, 2);
		outputFormat.writeRecord(type);
		outputFormat.close();

		assertTrue("No result file present", hdfs.exists(hdfsPath));
		FileStatus[] files = hdfs.listStatus(hdfsPath);
		Assert.assertEquals(2, files.length);
		for (FileStatus file : files) {
			assertTrue("1".equals(file.getPath().getName()) || "2".equals(file.getPath().getName()));
		}
	} catch (IOException e) {
		e.printStackTrace();
		Assert.fail(e.getMessage());
	}
}
Example 3
Source File: HDFSTest.java From flink with Apache License 2.0
@Test
public void testChangingFileNames() {
	org.apache.hadoop.fs.Path hdfsPath = new org.apache.hadoop.fs.Path(hdfsURI + "/hdfsTest");
	Path path = new Path(hdfsPath.toString());
	String type = "one";
	TextOutputFormat<String> outputFormat = new TextOutputFormat<>(path);

	outputFormat.setWriteMode(FileSystem.WriteMode.NO_OVERWRITE);
	outputFormat.setOutputDirectoryMode(FileOutputFormat.OutputDirectoryMode.ALWAYS);

	try {
		outputFormat.open(0, 2);
		outputFormat.writeRecord(type);
		outputFormat.close();

		outputFormat.open(1, 2);
		outputFormat.writeRecord(type);
		outputFormat.close();

		assertTrue("No result file present", hdfs.exists(hdfsPath));
		FileStatus[] files = hdfs.listStatus(hdfsPath);
		Assert.assertEquals(2, files.length);
		for (FileStatus file : files) {
			assertTrue("1".equals(file.getPath().getName()) || "2".equals(file.getPath().getName()));
		}
	} catch (IOException e) {
		e.printStackTrace();
		Assert.fail(e.getMessage());
	}
}
Example 4
Source File: DataStream.java From Flink-CEPplus with Apache License 2.0
/**
 * Writes a DataStream to the file specified by path in text format.
 *
 * <p>For every element of the DataStream the result of {@link Object#toString()} is written.
 *
 * @param path
 *            The path pointing to the location the text file is written to
 * @param writeMode
 *            Controls the behavior for existing files. Options are
 *            NO_OVERWRITE and OVERWRITE.
 *
 * @return The closed DataStream.
 */
@PublicEvolving
public DataStreamSink<T> writeAsText(String path, WriteMode writeMode) {
	TextOutputFormat<T> tof = new TextOutputFormat<>(new Path(path));
	tof.setWriteMode(writeMode);
	return writeUsingOutputFormat(tof);
}
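A possible caller of the method above, writing a small stream as text; the output path and sample data are assumptions for illustration.

import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StreamWriteAsTextSketch {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		DataStream<String> stream = env.fromElements("alpha", "beta", "gamma");

		// OVERWRITE replaces existing output; NO_OVERWRITE makes the job fail if it already exists.
		stream.writeAsText("file:///tmp/stream-write-as-text", FileSystem.WriteMode.OVERWRITE)
				.setParallelism(1);

		env.execute("writeAsText with WriteMode");
	}
}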
Example 5
Source File: DataStream.java From flink with Apache License 2.0
/**
 * Writes a DataStream to the file specified by path in text format.
 *
 * <p>For every element of the DataStream the result of {@link Object#toString()} is written.
 *
 * @param path
 *            The path pointing to the location the text file is written to
 * @param writeMode
 *            Controls the behavior for existing files. Options are
 *            NO_OVERWRITE and OVERWRITE.
 *
 * @return The closed DataStream.
 */
@PublicEvolving
public DataStreamSink<T> writeAsText(String path, WriteMode writeMode) {
	TextOutputFormat<T> tof = new TextOutputFormat<>(new Path(path));
	tof.setWriteMode(writeMode);
	return writeUsingOutputFormat(tof);
}
Example 6
Source File: Calculator.java From OSTMap with Apache License 2.0
/**
 * run area calculation process
 * @param path path to config file
 * @throws Exception
 */
public void run(String path) throws Exception {

	readConfig(path);

	FlinkEnvManager fem = new FlinkEnvManager(path, "areaJob",
			TableIdentifier.RAW_TWITTER_DATA.get(),
			"HighScore");

	DataSet<Tuple2<Key,Value>> rawTwitterDataRows = fem.getDataFromAccumulo();

	DataSet<Tuple2<String,String>> geoList = rawTwitterDataRows.flatMap(new GeoExtrationFlatMap());

	DataSet<Tuple2<String,String>> reducedGroup = geoList
			.groupBy(0)
			.reduceGroup(new CoordGroupReduce());

	DataSet<Tuple3<String,Double,Integer>> userRanking = reducedGroup.flatMap(new GeoCalcFlatMap())
			.sortPartition(1, Order.DESCENDING).setParallelism(1);

	DataSet<Tuple2<Text,Mutation>> topTen = userRanking
			.groupBy(2)
			.reduceGroup(new TopTenGroupReduce("ac"));

	topTen.output(fem.getHadoopOF());

	fem.getExecutionEnvironment().execute("AreaProcess");

	TextOutputFormat<String> tof = new TextOutputFormat<>(new Path("file:///tmp/areauserranking"));
	tof.setWriteMode(FileSystem.WriteMode.OVERWRITE);

	userRanking.writeAsText("file:///tmp/areauserranking", FileSystem.WriteMode.OVERWRITE).setParallelism(1);

	fem.getExecutionEnvironment().execute("AreaCalculationProcess");
}
Example 7
Source File: PathCalculator.java From OSTMap with Apache License 2.0
/**
 * run area calculation process
 * @param path path to config file
 * @throws Exception
 */
public void run(String path) throws Exception {

	readConfig(path);

	FlinkEnvManager fem = new FlinkEnvManager(path, "pathJob",
			TableIdentifier.RAW_TWITTER_DATA.get(),
			"HighScore");

	DataSet<Tuple2<Key,Value>> rawTwitterDataRows = fem.getDataFromAccumulo();

	DataSet<Tuple2<String,String>> geoList = rawTwitterDataRows.flatMap(new PathGeoExtrationFlatMap());

	DataSet<Tuple2<String,String>> reducedGroup = geoList
			.groupBy(0)
			.reduceGroup(new PathCoordGroupReduce());

	DataSet<Tuple3<String,Double,Integer>> userRanking = reducedGroup.flatMap(new PathGeoCalcFlatMap())
			.sortPartition(1, Order.DESCENDING).setParallelism(1);

	DataSet<Tuple2<Text,Mutation>> topTen = userRanking
			.groupBy(2)
			.reduceGroup(new TopTenGroupReduce("td"));

	topTen.output(fem.getHadoopOF());

	fem.getExecutionEnvironment().execute("PathProcess");

	TextOutputFormat<String> tof = new TextOutputFormat<>(new Path("file:///tmp/pathuserranking"));
	tof.setWriteMode(FileSystem.WriteMode.OVERWRITE);

	userRanking.writeAsText("file:///tmp/pathuserranking", FileSystem.WriteMode.OVERWRITE).setParallelism(1);

	fem.getExecutionEnvironment().execute("PathCalculationProcess");
}
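A side note on Examples 6 and 7: the TextOutputFormat named tof is configured with setWriteMode(OVERWRITE) but never attached to the plan; the actual write happens through writeAsText(path, WriteMode), which builds an equivalent TextOutputFormat internally (see Examples 9 to 11 below). If one wanted to use the explicitly configured format instead, a hypothetical replacement for the writeAsText call inside run() could look like this fragment:

// Hypothetical explicit equivalent of the writeAsText(...) call above,
// using a format typed to match the userRanking DataSet.
TextOutputFormat<Tuple3<String, Double, Integer>> rankingFormat =
		new TextOutputFormat<>(new Path("file:///tmp/pathuserranking"));
rankingFormat.setWriteMode(FileSystem.WriteMode.OVERWRITE);
userRanking.output(rankingFormat).setParallelism(1);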
Example 8
Source File: DataStream.java From flink with Apache License 2.0
/**
 * Writes a DataStream to the file specified by path in text format.
 *
 * <p>For every element of the DataStream the result of {@link Object#toString()} is written.
 *
 * @param path
 *            The path pointing to the location the text file is written to
 * @param writeMode
 *            Controls the behavior for existing files. Options are
 *            NO_OVERWRITE and OVERWRITE.
 *
 * @return The closed DataStream.
 *
 * @deprecated Please use the {@link org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink} explicitly using the
 * {@link #addSink(SinkFunction)} method.
 */
@Deprecated
@PublicEvolving
public DataStreamSink<T> writeAsText(String path, WriteMode writeMode) {
	TextOutputFormat<T> tof = new TextOutputFormat<>(new Path(path));
	tof.setWriteMode(writeMode);
	return writeUsingOutputFormat(tof);
}
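The @deprecated tag above points to StreamingFileSink. A rough sketch of that replacement is shown below; it assumes Flink 1.6 or later, and the output path and checkpoint interval are illustrative. Unlike writeAsText, StreamingFileSink has no WriteMode: it always writes new part files and finalizes them on checkpoints.

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

public class StreamingFileSinkSketch {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		// Part files move from in-progress to finished only on checkpoints.
		env.enableCheckpointing(10_000);

		DataStream<String> stream = env.fromElements("alpha", "beta", "gamma");

		StreamingFileSink<String> sink = StreamingFileSink
				.forRowFormat(new Path("file:///tmp/streaming-file-sink"), new SimpleStringEncoder<String>("UTF-8"))
				.build();

		stream.addSink(sink);
		env.execute("StreamingFileSink instead of writeAsText");
	}
}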
Example 9
Source File: DataSet.java From Flink-CEPplus with Apache License 2.0
/**
 * Writes a DataSet as text file(s) to the specified location.
 *
 * <p>For each element of the DataSet the result of {@link Object#toString()} is written.
 *
 * @param filePath The path pointing to the location the text file is written to.
 * @param writeMode Control the behavior for existing files. Options are NO_OVERWRITE and OVERWRITE.
 * @return The DataSink that writes the DataSet.
 *
 * @see TextOutputFormat
 * @see DataSet#writeAsText(String) Output files and directories
 */
public DataSink<T> writeAsText(String filePath, WriteMode writeMode) {
	TextOutputFormat<T> tof = new TextOutputFormat<>(new Path(filePath));
	tof.setWriteMode(writeMode);
	return output(tof);
}
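A possible caller of the DataSet method above; the path and sample data are assumptions for illustration.

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.core.fs.FileSystem;

public class DataSetWriteAsTextSketch {
	public static void main(String[] args) throws Exception {
		ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
		DataSet<String> data = env.fromElements("alpha", "beta", "gamma");

		// With parallelism 1 a single file is produced; with higher parallelism,
		// a directory containing one numbered file per task.
		data.writeAsText("file:///tmp/dataset-write-as-text", FileSystem.WriteMode.OVERWRITE)
				.setParallelism(1);

		env.execute("DataSet writeAsText with WriteMode");
	}
}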
Example 10
Source File: DataSet.java From flink with Apache License 2.0
/**
 * Writes a DataSet as text file(s) to the specified location.
 *
 * <p>For each element of the DataSet the result of {@link Object#toString()} is written.
 *
 * @param filePath The path pointing to the location the text file is written to.
 * @param writeMode Control the behavior for existing files. Options are NO_OVERWRITE and OVERWRITE.
 * @return The DataSink that writes the DataSet.
 *
 * @see TextOutputFormat
 * @see DataSet#writeAsText(String) Output files and directories
 */
public DataSink<T> writeAsText(String filePath, WriteMode writeMode) {
	TextOutputFormat<T> tof = new TextOutputFormat<>(new Path(filePath));
	tof.setWriteMode(writeMode);
	return output(tof);
}
Example 11
Source File: DataSet.java From flink with Apache License 2.0
/**
 * Writes a DataSet as text file(s) to the specified location.
 *
 * <p>For each element of the DataSet the result of {@link Object#toString()} is written.
 *
 * @param filePath The path pointing to the location the text file is written to.
 * @param writeMode Control the behavior for existing files. Options are NO_OVERWRITE and OVERWRITE.
 * @return The DataSink that writes the DataSet.
 *
 * @see TextOutputFormat
 * @see DataSet#writeAsText(String) Output files and directories
 */
public DataSink<T> writeAsText(String filePath, WriteMode writeMode) {
	TextOutputFormat<T> tof = new TextOutputFormat<>(new Path(filePath));
	tof.setWriteMode(writeMode);
	return output(tof);
}