org.datavec.api.writable.WritableType Java Examples
The following examples show how to use
org.datavec.api.writable.WritableType.
Each example notes its source file, the project it comes from, and its license.
Example #1
Source File: Comparators.java, from deeplearning4j (Apache License 2.0)

public static Comparator<Writable> forType(WritableType type, boolean ascending) {
    Comparator<Writable> c;
    switch (type) {
        case Byte:
        case Int:
            c = new IntWritableComparator();
            break;
        case Double:
            c = new DoubleWritableComparator();
            break;
        case Float:
            c = new FloatWritableComparator();
            break;
        case Long:
            c = new LongWritableComparator();
            break;
        case Text:
            c = new TextWritableComparator();
            break;
        case Boolean:
        case NDArray:
        case Image:
        case Null:
        default:
            throw new UnsupportedOperationException("No built-in comparator for writable type: " + type);
    }
    if (ascending) {
        return c;
    }
    return new ReverseComparator<>(c);
}
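A minimal usage sketch for the method above: build a descending comparator for integer writables and sort a small column. The package names in the imports are my best guess and are not confirmed by the listing.

import java.util.Arrays;
import java.util.List;

import org.datavec.api.writable.IntWritable;
import org.datavec.api.writable.Writable;
import org.datavec.api.writable.WritableType;
import org.datavec.api.writable.comparator.Comparators; // package assumed

public class ComparatorsUsage {
    public static void main(String[] args) {
        // Hypothetical column of integer writables
        List<Writable> column = Arrays.asList(
                new IntWritable(3), new IntWritable(1), new IntWritable(2));

        // ascending = false wraps the Int comparator in a ReverseComparator
        column.sort(Comparators.forType(WritableType.Int, false));

        System.out.println(column); // expected order: 3, 2, 1
    }
}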
Example #2
Source File: ConvertToInteger.java, from DataVec (Apache License 2.0)

@Override
public IntWritable map(Writable writable) {
    if (writable.getType() == WritableType.Int) {
        return (IntWritable) writable;
    }
    return new IntWritable(writable.toInt());
}
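A hedged usage sketch for the transform above. The single-argument ConvertToInteger(String columnName) constructor, the column name, and the import package are assumptions for illustration; map itself operates on one writable at a time, as shown in the example.

import org.datavec.api.transform.transform.integer.ConvertToInteger; // package assumed
import org.datavec.api.writable.IntWritable;
import org.datavec.api.writable.Text;

public class ConvertToIntegerUsage {
    public static void main(String[] args) {
        // "myColumn" is a hypothetical column name; the constructor signature is assumed
        ConvertToInteger convert = new ConvertToInteger("myColumn");

        // A Text writable holding a numeric string becomes an IntWritable via toInt()
        IntWritable converted = convert.map(new Text("42"));
        System.out.println(converted.toInt()); // 42
    }
}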
Example #3
Source File: ConvertToDouble.java, from DataVec (Apache License 2.0)

@Override
public DoubleWritable map(Writable writable) {
    if (writable.getType() == WritableType.Double) {
        return (DoubleWritable) writable;
    }
    return new DoubleWritable(writable.toDouble());
}
Example #4
Source File: Comparators.java, from DataVec (Apache License 2.0)

public static Comparator<Writable> forType(WritableType type, boolean ascending) {
    Comparator<Writable> c;
    switch (type) {
        case Byte:
        case Int:
            c = new IntWritableComparator();
            break;
        case Double:
            c = new DoubleWritableComparator();
            break;
        case Float:
            c = new FloatWritableComparator();
            break;
        case Long:
            c = new LongWritableComparator();
            break;
        case Text:
            c = new TextWritableComparator();
            break;
        case Boolean:
        case NDArray:
        case Image:
        case Null:
        default:
            throw new UnsupportedOperationException("No built-in comparator for writable type: " + type);
    }
    if (ascending) {
        return c;
    }
    return new ReverseComparator<>(c);
}
Example #5
Source File: ConvertToDouble.java, from deeplearning4j (Apache License 2.0)

@Override
public DoubleWritable map(Writable writable) {
    if (writable.getType() == WritableType.Double) {
        return (DoubleWritable) writable;
    }
    return new DoubleWritable(writable.toDouble());
}
Example #6
Source File: ConvertToFloat.java, from deeplearning4j (Apache License 2.0)

@Override
public FloatWritable map(Writable writable) {
    // Check for Float (not Double) before casting, so the cast below cannot fail
    if (writable.getType() == WritableType.Float) {
        return (FloatWritable) writable;
    }
    return new FloatWritable(writable.toFloat());
}
Example #7
Source File: ConvertToInteger.java, from deeplearning4j (Apache License 2.0)

@Override
public IntWritable map(Writable writable) {
    if (writable.getType() == WritableType.Int) {
        return (IntWritable) writable;
    }
    return new IntWritable(writable.toInt());
}
Example #8
Source File: InferenceExecutionerStepRunner.java, from konduit-serving (Apache License 2.0)

private boolean allNdArray(Record[] records) {
    boolean isAllNdArrays = true;
    for (Record record : records) {
        // A record qualifies only if it holds exactly one writable and that writable is an NDArray
        if (record.getRecord().size() != 1 || record.getRecord().get(0).getType() != WritableType.NDArray) {
            isAllNdArrays = false;
            break;
        }
    }
    return isAllNdArrays;
}
Example #9
Source File: ImageWritable.java, from deeplearning4j (Apache License 2.0)

@Override
public void writeType(DataOutput out) throws IOException {
    out.writeShort(WritableType.Image.typeIdx());
}
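A small round-trip sketch of how a type header written this way might be read back. It relies only on typeIdx() from the snippet above plus standard java.io streams; the lookup loop is illustrative and is not DataVec's own deserialization code.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.datavec.api.writable.WritableType;

public class TypeHeaderRoundTrip {
    public static void main(String[] args) throws IOException {
        // Write the type index the same way ImageWritable.writeType does
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeShort(WritableType.Image.typeIdx());
        out.flush();

        // Read it back and resolve the enum constant by scanning typeIdx() values
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        short idx = in.readShort();
        WritableType resolved = null;
        for (WritableType t : WritableType.values()) {
            if (t.typeIdx() == idx) {
                resolved = t;
                break;
            }
        }
        System.out.println(resolved); // Image
    }
}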
Example #10
Source File: Configuration.java, from deeplearning4j (Apache License 2.0)

@Override
public WritableType getType() {
    throw new UnsupportedOperationException();
}
Example #11
Source File: Comparators.java, from deeplearning4j (Apache License 2.0)

public static Comparator<Writable> forType(WritableType type) {
    return forType(type, true);
}
Example #12
Source File: BaseInputFormat.java, from deeplearning4j (Apache License 2.0)

@Override
public WritableType getType() {
    throw new UnsupportedOperationException();
}
Example #13
Source File: ListStringInputFormat.java, from deeplearning4j (Apache License 2.0)

@Override
public WritableType getType() {
    throw new UnsupportedOperationException();
}
Example #14
Source File: ImageWritable.java, from deeplearning4j (Apache License 2.0)

@Override
public WritableType getType() {
    return WritableType.Image;
}
Example #15
Source File: ImageWritable.java, from DataVec (Apache License 2.0)

@Override
public void writeType(DataOutput out) throws IOException {
    out.writeShort(WritableType.Image.typeIdx());
}
Example #16
Source File: ImageWritable.java, from DataVec (Apache License 2.0)

@Override
public WritableType getType() {
    return WritableType.Image;
}
Example #17
Source File: ListStringInputFormat.java, from DataVec (Apache License 2.0)

@Override
public WritableType getType() {
    throw new UnsupportedOperationException();
}
Example #18
Source File: BaseInputFormat.java, from DataVec (Apache License 2.0)

@Override
public WritableType getType() {
    throw new UnsupportedOperationException();
}
Example #19
Source File: Comparators.java, from DataVec (Apache License 2.0)

public static Comparator<Writable> forType(WritableType type) {
    return forType(type, true);
}
Example #20
Source File: Configuration.java, from DataVec (Apache License 2.0)

@Override
public WritableType getType() {
    throw new UnsupportedOperationException();
}
Example #21
Source File: MapFileSequenceRecordWriter.java, from deeplearning4j (Apache License 2.0)

/**
 * @param outputDir     Output directory for the map file(s)
 * @param convertTextTo If null: make no changes to Text writable objects. If non-null, Text writable instances
 *                      will be converted to this type. This is useful when you would rather store numerical values
 *                      even if the original record reader produces strings/text.
 */
public MapFileSequenceRecordWriter(@NonNull File outputDir, WritableType convertTextTo) {
    this(outputDir, DEFAULT_MAP_FILE_SPLIT_SIZE, convertTextTo);
}
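A hedged usage sketch for this constructor: the output path is hypothetical, the import package is assumed, and the write(List<List<Writable>>)/close() calls follow the usual SequenceRecordWriter contract rather than anything shown in the listing.

import java.io.File;
import java.util.Arrays;
import java.util.List;

import org.datavec.api.writable.Text;
import org.datavec.api.writable.Writable;
import org.datavec.api.writable.WritableType;
import org.datavec.hadoop.records.writer.mapfile.MapFileSequenceRecordWriter; // package assumed

public class MapFileSequenceWriterUsage {
    public static void main(String[] args) throws Exception {
        File outputDir = new File("/tmp/mapfile-sequences"); // hypothetical output location

        // Convert any Text values to Float writables as they are written
        MapFileSequenceRecordWriter writer =
                new MapFileSequenceRecordWriter(outputDir, WritableType.Float);

        // One sequence: two time steps, one column each (numeric strings)
        List<List<Writable>> sequence = Arrays.asList(
                Arrays.<Writable>asList(new Text("1.0")),
                Arrays.<Writable>asList(new Text("2.0")));

        writer.write(sequence); // assumed SequenceRecordWriter#write(List<List<Writable>>)
        writer.close();
    }
}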
Example #22
Source File: MapFileRecordWriter.java, from DataVec (Apache License 2.0)

/**
 * @param outputDir     Output directory for the map file(s)
 * @param convertTextTo If null: make no changes to Text writable objects. If non-null, Text writable instances
 *                      will be converted to this type. This is useful when you would rather store numerical values
 *                      even if the original record reader produces strings/text.
 */
public MapFileRecordWriter(@NonNull File outputDir, WritableType convertTextTo) {
    this(outputDir, DEFAULT_MAP_FILE_SPLIT_SIZE, convertTextTo);
}
Example #23
Source File: MapFileRecordWriter.java, from DataVec (Apache License 2.0)

/**
 * @param outputDir        Output directory for the map file(s)
 * @param mapFileSplitSize Split size for the map file: if 0, use a single map file for all output. If > 0,
 *                         multiple map files will be used: each will contain a maximum of mapFileSplitSize
 *                         examples. This can be used to avoid having a single multi-gigabyte map file, which may
 *                         be undesirable in some cases (transfer across the network, for example).
 * @param convertTextTo    If null: make no changes to Text writable objects. If non-null, Text writable instances
 *                         will be converted to this type. This is useful when you would rather store numerical values
 *                         even if the original record reader produces strings/text.
 */
public MapFileRecordWriter(@NonNull File outputDir, int mapFileSplitSize, WritableType convertTextTo) {
    super(outputDir, mapFileSplitSize, convertTextTo);
}
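A similar hedged sketch for the non-sequence writer, exercising the split-size parameter. The output path and import package are assumptions, as are the write(List<Writable>)/close() calls, which follow the usual RecordWriter contract rather than anything shown in the listing.

import java.io.File;
import java.util.Arrays;
import java.util.List;

import org.datavec.api.writable.IntWritable;
import org.datavec.api.writable.Text;
import org.datavec.api.writable.Writable;
import org.datavec.api.writable.WritableType;
import org.datavec.hadoop.records.writer.mapfile.MapFileRecordWriter; // package assumed

public class MapFileWriterUsage {
    public static void main(String[] args) throws Exception {
        File outputDir = new File("/tmp/mapfile-records"); // hypothetical output location

        // At most 10_000 examples per map file; Text values stored as Double writables
        MapFileRecordWriter writer = new MapFileRecordWriter(outputDir, 10_000, WritableType.Double);

        List<Writable> record = Arrays.asList(new IntWritable(1), new Text("2.5"));
        writer.write(record); // assumed RecordWriter#write(List<Writable>)
        writer.close();
    }
}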
Example #24
Source File: MapFileSequenceRecordWriter.java, from deeplearning4j (Apache License 2.0)

/**
 * @param outputDir           Output directory for the map file(s)
 * @param mapFileSplitSize    Split size for the map file: if 0, use a single map file for all output. If > 0,
 *                            multiple map files will be used: each will contain a maximum of mapFileSplitSize
 *                            examples. This can be used to avoid having a single multi-gigabyte map file, which may
 *                            be undesirable in some cases (transfer across the network, for example).
 * @param convertTextTo       If null: make no changes to Text writable objects. If non-null, Text writable instances
 *                            will be converted to this type. This is useful when you would rather store numerical values
 *                            even if the original record reader produces strings/text.
 * @param indexInterval       Index interval for the map file. Defaults to 1, which is suitable for most cases
 * @param filenamePattern     The naming pattern for the map files. Used with String.format(pattern, int)
 * @param hadoopConfiguration Hadoop configuration.
 */
public MapFileSequenceRecordWriter(@NonNull File outputDir, int mapFileSplitSize, WritableType convertTextTo,
                                   int indexInterval, String filenamePattern,
                                   org.apache.hadoop.conf.Configuration hadoopConfiguration) {
    super(outputDir, mapFileSplitSize, convertTextTo, indexInterval, filenamePattern, hadoopConfiguration);
}
Example #25
Source File: MapFileSequenceRecordWriter.java, from deeplearning4j (Apache License 2.0)

/**
 * @param outputDir           Output directory for the map file(s)
 * @param mapFileSplitSize    Split size for the map file: if 0, use a single map file for all output. If > 0,
 *                            multiple map files will be used: each will contain a maximum of mapFileSplitSize
 *                            examples. This can be used to avoid having a single multi-gigabyte map file, which may
 *                            be undesirable in some cases (transfer across the network, for example).
 * @param convertTextTo       If null: make no changes to Text writable objects. If non-null, Text writable instances
 *                            will be converted to this type. This is useful when you would rather store numerical values
 *                            even if the original record reader produces strings/text.
 * @param indexInterval       Index interval for the map file. Defaults to 1, which is suitable for most cases
 * @param hadoopConfiguration Hadoop configuration.
 */
public MapFileSequenceRecordWriter(@NonNull File outputDir, int mapFileSplitSize, WritableType convertTextTo,
                                   int indexInterval,
                                   org.apache.hadoop.conf.Configuration hadoopConfiguration) {
    super(outputDir, mapFileSplitSize, convertTextTo, indexInterval, hadoopConfiguration);
}
Example #26
Source File: MapFileSequenceRecordWriter.java, from deeplearning4j (Apache License 2.0)

/**
 * @param outputDir           Output directory for the map file(s)
 * @param mapFileSplitSize    Split size for the map file: if 0, use a single map file for all output. If > 0,
 *                            multiple map files will be used: each will contain a maximum of mapFileSplitSize
 *                            examples. This can be used to avoid having a single multi-gigabyte map file, which may
 *                            be undesirable in some cases (transfer across the network, for example).
 * @param convertTextTo       If null: make no changes to Text writable objects. If non-null, Text writable instances
 *                            will be converted to this type. This is useful when you would rather store numerical values
 *                            even if the original record reader produces strings/text.
 * @param hadoopConfiguration Hadoop configuration.
 */
public MapFileSequenceRecordWriter(@NonNull File outputDir, int mapFileSplitSize, WritableType convertTextTo,
                                   org.apache.hadoop.conf.Configuration hadoopConfiguration) {
    super(outputDir, mapFileSplitSize, convertTextTo, DEFAULT_INDEX_INTERVAL, hadoopConfiguration);
}
Example #27
Source File: MapFileSequenceRecordWriter.java, from deeplearning4j (Apache License 2.0)

/**
 * @param outputDir        Output directory for the map file(s)
 * @param mapFileSplitSize Split size for the map file: if 0, use a single map file for all output. If > 0,
 *                         multiple map files will be used: each will contain a maximum of mapFileSplitSize
 *                         examples. This can be used to avoid having a single multi-gigabyte map file, which may
 *                         be undesirable in some cases (transfer across the network, for example).
 * @param convertTextTo    If null: make no changes to Text writable objects. If non-null, Text writable instances
 *                         will be converted to this type. This is useful when you would rather store numerical values
 *                         even if the original record reader produces strings/text.
 */
public MapFileSequenceRecordWriter(@NonNull File outputDir, int mapFileSplitSize, WritableType convertTextTo) {
    super(outputDir, mapFileSplitSize, convertTextTo);
}
Example #28
Source File: MapFileRecordWriter.java, from DataVec (Apache License 2.0)

/**
 * @param outputDir           Output directory for the map file(s)
 * @param mapFileSplitSize    Split size for the map file: if 0, use a single map file for all output. If > 0,
 *                            multiple map files will be used: each will contain a maximum of mapFileSplitSize
 *                            examples. This can be used to avoid having a single multi-gigabyte map file, which may
 *                            be undesirable in some cases (transfer across the network, for example).
 * @param convertTextTo       If null: make no changes to Text writable objects. If non-null, Text writable instances
 *                            will be converted to this type. This is useful when you would rather store numerical values
 *                            even if the original record reader produces strings/text.
 * @param indexInterval       Index interval for the map file. Defaults to 1, which is suitable for most cases
 * @param filenamePattern     The naming pattern for the map files. Used with String.format(pattern, int)
 * @param hadoopConfiguration Hadoop configuration.
 */
public MapFileRecordWriter(@NonNull File outputDir, int mapFileSplitSize, WritableType convertTextTo,
                           int indexInterval, String filenamePattern,
                           org.apache.hadoop.conf.Configuration hadoopConfiguration) {
    super(outputDir, mapFileSplitSize, convertTextTo, indexInterval, filenamePattern, hadoopConfiguration);
}
Example #29
Source File: MapFileRecordWriter.java, from deeplearning4j (Apache License 2.0)

/**
 * @param outputDir           Output directory for the map file(s)
 * @param mapFileSplitSize    Split size for the map file: if 0, use a single map file for all output. If > 0,
 *                            multiple map files will be used: each will contain a maximum of mapFileSplitSize
 *                            examples. This can be used to avoid having a single multi-gigabyte map file, which may
 *                            be undesirable in some cases (transfer across the network, for example).
 * @param convertTextTo       If null: make no changes to Text writable objects. If non-null, Text writable instances
 *                            will be converted to this type. This is useful when you would rather store numerical values
 *                            even if the original record reader produces strings/text.
 * @param indexInterval       Index interval for the map file. Defaults to 1, which is suitable for most cases
 * @param filenamePattern     The naming pattern for the map files. Used with String.format(pattern, int)
 * @param hadoopConfiguration Hadoop configuration.
 */
public MapFileRecordWriter(@NonNull File outputDir, int mapFileSplitSize, WritableType convertTextTo,
                           int indexInterval, String filenamePattern,
                           org.apache.hadoop.conf.Configuration hadoopConfiguration) {
    super(outputDir, mapFileSplitSize, convertTextTo, indexInterval, filenamePattern, hadoopConfiguration);
}
Example #30
Source File: MapFileRecordWriter.java, from deeplearning4j (Apache License 2.0)

/**
 * @param outputDir           Output directory for the map file(s)
 * @param mapFileSplitSize    Split size for the map file: if 0, use a single map file for all output. If > 0,
 *                            multiple map files will be used: each will contain a maximum of mapFileSplitSize
 *                            examples. This can be used to avoid having a single multi-gigabyte map file, which may
 *                            be undesirable in some cases (transfer across the network, for example).
 * @param convertTextTo       If null: make no changes to Text writable objects. If non-null, Text writable instances
 *                            will be converted to this type. This is useful when you would rather store numerical values
 *                            even if the original record reader produces strings/text.
 * @param indexInterval       Index interval for the map file. Defaults to 1, which is suitable for most cases
 * @param hadoopConfiguration Hadoop configuration.
 */
public MapFileRecordWriter(@NonNull File outputDir, int mapFileSplitSize, WritableType convertTextTo,
                           int indexInterval,
                           org.apache.hadoop.conf.Configuration hadoopConfiguration) {
    super(outputDir, mapFileSplitSize, convertTextTo, indexInterval, hadoopConfiguration);
}