parquet.hadoop.api.InitContext Java Examples
The following examples show how to use parquet.hadoop.api.InitContext. An InitContext is the argument passed to ReadSupport.init(...); it exposes the Hadoop Configuration, the schema of the Parquet file being read, and the files' key/value metadata.
Links to the original project and source file appear above each example.
Example #1
Source File: PentahoParquetReadSupport.java From pentaho-hadoop-shims with Apache License 2.0
@Override
public ReadContext init( InitContext context ) {
  String schemaStr = context.getConfiguration().get( ParquetConverter.PARQUET_SCHEMA_CONF_KEY );
  if ( schemaStr == null ) {
    throw new RuntimeException( "Schema not defined in the PentahoParquetSchema key" );
  }

  ParquetInputFieldList schema = ParquetInputFieldList.unmarshall( schemaStr );
  converter = new ParquetConverter( schema.getFields() );

  // get all fields from file's schema
  MessageType fileSchema = context.getFileSchema();
  List<Type> newFields = new ArrayList<>();

  // use only required fields
  for ( IParquetInputField f : schema ) {
    Type origField = fileSchema.getFields().get( fileSchema.getFieldIndex( f.getFormatFieldName() ) );
    newFields.add( origField );
  }

  if ( newFields.isEmpty() ) {
    throw new RuntimeException( "Fields should be declared" );
  }

  MessageType newSchema = new MessageType( fileSchema.getName(), newFields );
  return new ReadContext( newSchema, new HashMap<>() );
}
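This init(...) builds a projected MessageType containing only the declared input fields and returns it through the ReadContext, so parquet-mr reads just those columns from disk. As a rough sketch of how such a ReadSupport could be wired up (not taken from the Pentaho sources: the file path, the schema string, and the direct no-arg instantiation are assumptions for illustration), the pre-org.apache parquet.hadoop.ParquetReader accepts a custom ReadSupport directly:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import parquet.hadoop.ParquetReader;

public class ProjectedReadExample {
  public static void main( String[] args ) throws Exception {
    Configuration conf = new Configuration();
    // Assumption: args[0] holds a serialized field list in the format
    // expected by ParquetInputFieldList.unmarshall(...)
    conf.set( ParquetConverter.PARQUET_SCHEMA_CONF_KEY, args[ 0 ] );

    // Hypothetical wiring: PentahoParquetReadSupport and ParquetConverter
    // come from the project above; instantiating the support directly like
    // this is an assumption for illustration.
    PentahoParquetReadSupport readSupport = new PentahoParquetReadSupport();
    ParquetReader<?> reader = new ParquetReader<>( conf, new Path( args[ 1 ] ), readSupport );

    Object row;
    while ( ( row = reader.read() ) != null ) {
      // each materialized row contains only the projected columns
    }
    reader.close();
  }
}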
Example #2
Source File: ParquetHdfsDataWriterTest.java From incubator-gobblin with Apache License 2.0
@Override
public ReadContext init(InitContext context) {
  return new ReadContext(context.getFileSchema());
}
Example #3
Source File: SimpleReadSupport.java From parquet-tools with Apache License 2.0
@Override
public ReadContext init(InitContext context) {
  return new ReadContext(context.getFileSchema());
}
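Examples #2 and #3 perform no projection: returning new ReadContext(context.getFileSchema()) simply requests every column in the file. Note that init(...) is only half of the ReadSupport contract; prepareForRead(...) must also be implemented to build the RecordMaterializer that turns column values into records. A minimal sketch, modeled on parquet's bundled GroupReadSupport and assuming the pre-org.apache parquet namespace used above:

import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import parquet.example.data.Group;
import parquet.example.data.simple.convert.GroupRecordConverter;
import parquet.hadoop.api.InitContext;
import parquet.hadoop.api.ReadSupport;
import parquet.io.api.RecordMaterializer;
import parquet.schema.MessageType;

// A minimal ReadSupport that reads every column as a generic Group record.
public class PassThroughReadSupport extends ReadSupport<Group> {

  @Override
  public ReadContext init(InitContext context) {
    // No projection: request the file schema as-is, like Examples #2 and #3.
    return new ReadContext(context.getFileSchema());
  }

  @Override
  public RecordMaterializer<Group> prepareForRead(Configuration configuration,
      Map<String, String> keyValueMetaData, MessageType fileSchema,
      ReadContext readContext) {
    // Materialize records against the (possibly projected) requested schema.
    return new GroupRecordConverter(readContext.getRequestedSchema());
  }
}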