parquet.io.api.RecordConsumer Java Examples
The following examples show how to use
parquet.io.api.RecordConsumer.
Each example notes its original project, source file, and license, so you can follow it back to the full source.
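RecordConsumer exposes an event-style API: a record is bracketed by startMessage()/endMessage(), and each value is bracketed by startField()/endField() around one of the add* calls. Below is a minimal sketch of that protocol for a hypothetical two-field schema; the field names, indexes, and the writeRecord helper are assumptions for illustration, not taken from any of the projects above.

import parquet.io.api.Binary;
import parquet.io.api.RecordConsumer;

// Minimal sketch of the RecordConsumer call protocol, assuming a schema with
// an int32 field "id" (index 0) and a binary (UTF8) field "name" (index 1).
public class RecordConsumerSketch {

  public static void writeRecord(RecordConsumer consumer, int id, String name) {
    consumer.startMessage();                      // begin one record

    consumer.startField("id", 0);                 // open field by name and schema index
    consumer.addInteger(id);                      // emit the primitive value
    consumer.endField("id", 0);                   // close the field

    if (name != null) {                           // optional fields are simply skipped
      consumer.startField("name", 1);
      consumer.addBinary(Binary.fromString(name));
      consumer.endField("name", 1);
    }

    consumer.endMessage();                        // finish the record
  }
}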
Example #1
Source File: PentahoParquetWriteSupport.java, from pentaho-hadoop-shims (Apache License 2.0)
public void writeRow( RowMetaAndData row, RecordConsumer consumer ) {
  consumer.startMessage();
  int index = 0;
  for ( IParquetOutputField f : outputFields ) {
    if ( f.getFormatFieldName() == null ) {
      continue;
    }
    try {
      writeField( f, index, row, consumer );
      index++;
    } catch ( KettleValueException ex ) {
      throw new RuntimeException( ex );
    }
  }
  consumer.endMessage();
}
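The writeField helper called above is not part of this snippet. A sketch of what such a helper typically does, using only the RecordConsumer API, might look like the following; the class and method names are assumptions for illustration, not the actual Pentaho code.

import parquet.io.api.Binary;
import parquet.io.api.RecordConsumer;

// Hypothetical per-type field writers in the spirit of writeField above:
// wrap one value in startField/endField and pick the matching add* method.
final class FieldWriterSketch {

  static void writeLongField(RecordConsumer consumer, String name, int index, Long value) {
    if (value == null) {
      return;                                     // null values: the field is simply not written
    }
    consumer.startField(name, index);
    consumer.addLong(value);
    consumer.endField(name, index);
  }

  static void writeStringField(RecordConsumer consumer, String name, int index, String value) {
    if (value == null) {
      return;
    }
    consumer.startField(name, index);
    consumer.addBinary(Binary.fromString(value));
    consumer.endField(name, index);
  }
}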
Example #2
Source File: TupleWriter.java, from hadoop-etl-udfs (MIT License)
public TupleWriter(RecordConsumer recordConsumer, GroupType schema) {
  this.recordConsumer = recordConsumer;
  this.schema = schema;
}
Example #3
Source File: TupleWriteSupport.java, from hadoop-etl-udfs (MIT License)
public void prepareForWrite(RecordConsumer recordConsumer) {
  writer = new TupleWriter(recordConsumer, schema);
}
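prepareForWrite is one of the three callbacks on parquet.hadoop.api.WriteSupport: init declares the schema, prepareForWrite receives the RecordConsumer, and write is then called once per record. A minimal, self-contained sketch of that lifecycle follows; the single-column schema and the String record type are assumptions made for illustration.

import java.util.HashMap;

import org.apache.hadoop.conf.Configuration;

import parquet.hadoop.api.WriteSupport;
import parquet.io.api.Binary;
import parquet.io.api.RecordConsumer;
import parquet.schema.MessageType;
import parquet.schema.MessageTypeParser;

// Minimal WriteSupport sketch: init() declares the schema, prepareForWrite()
// hands over the RecordConsumer, write() emits one record per call.
public class StringWriteSupport extends WriteSupport<String> {

  private static final MessageType SCHEMA =
      MessageTypeParser.parseMessageType("message line { required binary text (UTF8); }");

  private RecordConsumer consumer;

  @Override
  public WriteContext init(Configuration configuration) {
    return new WriteContext(SCHEMA, new HashMap<String, String>());
  }

  @Override
  public void prepareForWrite(RecordConsumer recordConsumer) {
    this.consumer = recordConsumer;               // keep the consumer, as the examples above do
  }

  @Override
  public void write(String record) {
    consumer.startMessage();
    consumer.startField("text", 0);
    consumer.addBinary(Binary.fromString(record));
    consumer.endField("text", 0);
    consumer.endMessage();
  }
}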
Example #4
Source File: ParquetGroup.java, from incubator-gobblin (Apache License 2.0)
public void writeValue(int field, int index, RecordConsumer recordConsumer) {
  ((Primitive) this.getValue(field, index)).writeValue(recordConsumer);
}
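The cast to Primitive works because each primitive value object knows how to emit itself: its writeValue picks the matching add* call on the consumer, while the enclosing group handles startField/endField. A small illustrative value holder in that spirit is sketched below; it is an assumption, not the actual Primitive hierarchy used by ParquetGroup.

import parquet.io.api.RecordConsumer;

// Illustrative value holder: the concrete value type chooses the add* method.
public class IntValueSketch {

  private final int value;

  public IntValueSketch(int value) {
    this.value = value;
  }

  // The surrounding group writer is expected to have already called
  // startField(...) for this value's field before delegating here.
  public void writeValue(RecordConsumer recordConsumer) {
    recordConsumer.addInteger(value);
  }
}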
Example #5
Source File: PentahoParquetWriteSupport.java, from pentaho-hadoop-shims (Apache License 2.0)
@Override
public void prepareForWrite( RecordConsumer recordConsumer ) {
  consumer = recordConsumer;
}