Java Code Examples for org.apache.parquet.io.InputFile#newStream()
The following examples show how to use
org.apache.parquet.io.InputFile#newStream().
The source file, originating project, and license are noted above each example.
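Before the examples, it may help to see the shape of the InputFile contract itself: newStream() returns a SeekableInputStream positioned at the start of the file, and callers are responsible for closing it. The sketch below is a minimal, hypothetical in-memory implementation (the class name InMemoryInputFile is invented for illustration; DelegatingSeekableInputStream is a helper from parquet-common that implements the stream's read methods on top of a plain InputStream):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;

import org.apache.parquet.io.DelegatingSeekableInputStream;
import org.apache.parquet.io.InputFile;
import org.apache.parquet.io.SeekableInputStream;

/**
 * A hypothetical in-memory InputFile, sketched for illustration only.
 * Assumes parquet-common is on the classpath.
 */
public class InMemoryInputFile implements InputFile {
  private final byte[] data;

  public InMemoryInputFile(byte[] data) {
    this.data = data;
  }

  @Override
  public long getLength() {
    return data.length;
  }

  @Override
  public SeekableInputStream newStream() throws IOException {
    // Each call returns a fresh stream positioned at offset 0.
    ByteArrayInputStream in = new ByteArrayInputStream(data);
    return new DelegatingSeekableInputStream(in) {
      @Override
      public long getPos() {
        // Position is total length minus what is still unread.
        return data.length - in.available();
      }

      @Override
      public void seek(long newPos) throws IOException {
        // ByteArrayInputStream's default mark is offset 0, so reset + skip seeks absolutely.
        in.reset();
        long skipped = in.skip(newPos);
        if (skipped != newPos) {
          throw new IOException("Cannot seek to " + newPos);
        }
      }
    };
  }
}
```

A caller would typically wrap the stream in try-with-resources, exactly as the parquet-mr examples below do, so the stream is closed even when reading fails.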
Example 1
Source File: ParquetFileReader.java From parquet-mr with Apache License 2.0
public ParquetFileReader(InputFile file, ParquetReadOptions options) throws IOException {
  this.converter = new ParquetMetadataConverter(options);
  this.file = file;
  this.f = file.newStream();
  this.options = options;
  try {
    this.footer = readFooter(file, options, f, converter);
  } catch (Exception e) {
    // In case that reading footer throws an exception in the constructor, the new stream
    // should be closed. Otherwise, there's no way to close this outside.
    f.close();
    throw e;
  }
  this.fileMetaData = footer.getFileMetaData();
  this.blocks = filterRowGroups(footer.getBlocks());
  this.blockIndexStores = listWithNulls(this.blocks.size());
  this.blockRowRanges = listWithNulls(this.blocks.size());
  for (ColumnDescriptor col : footer.getFileMetaData().getSchema().getColumns()) {
    paths.put(ColumnPath.get(col.getPath()), col);
  }
  this.crc = options.usePageChecksumVerification() ? new CRC32() : null;
}
Example 2
Source File: ParquetFileReader.java From parquet-mr with Apache License 2.0
/**
 * Reads the meta data block in the footer of the file using provided input stream
 * @param file a {@link InputFile} to read
 * @param filter the filter to apply to row groups
 * @return the metadata blocks in the footer
 * @throws IOException if an error occurs while reading the file
 * @deprecated will be removed in 2.0.0;
 *             use {@link ParquetFileReader#open(InputFile, ParquetReadOptions)}
 */
@Deprecated
public static final ParquetMetadata readFooter(InputFile file, MetadataFilter filter) throws IOException {
  ParquetReadOptions options;
  if (file instanceof HadoopInputFile) {
    options = HadoopReadOptions.builder(((HadoopInputFile) file).getConfiguration())
        .withMetadataFilter(filter).build();
  } else {
    options = ParquetReadOptions.builder().withMetadataFilter(filter).build();
  }

  try (SeekableInputStream in = file.newStream()) {
    return readFooter(file, options, in);
  }
}
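The deprecation note above points at the replacement API, ParquetFileReader.open(InputFile, ParquetReadOptions). A minimal sketch of the modern equivalent might look like the following (the helper name readFooterModern is invented for illustration; it assumes the reader's getFooter() accessor, and that parquet-hadoop is on the classpath):

```java
import java.io.IOException;

import org.apache.parquet.ParquetReadOptions;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;
import org.apache.parquet.io.InputFile;

public class FooterExample {
  /**
   * Hypothetical helper: reads the footer via the non-deprecated open() API.
   * The reader calls file.newStream() internally, and try-with-resources
   * ensures that stream is closed.
   */
  static ParquetMetadata readFooterModern(InputFile file) throws IOException {
    ParquetReadOptions options = ParquetReadOptions.builder().build();
    try (ParquetFileReader reader = ParquetFileReader.open(file, options)) {
      return reader.getFooter();
    }
  }
}
```

Compared with the deprecated static readFooter above, this form lets the reader manage the stream lifecycle itself rather than requiring the caller to open and pass one in.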
Example 3
Source File: TestDataPageV1Checksums.java From parquet-mr with Apache License 2.0
/**
 * Test whether corruption in the page content is detected by checksum verification
 */
@Test
public void testCorruptedPage() throws IOException {
  Configuration conf = new Configuration();
  conf.setBoolean(ParquetOutputFormat.PAGE_WRITE_CHECKSUM_ENABLED, true);

  Path path = writeSimpleParquetFile(conf, CompressionCodecName.UNCOMPRESSED);

  InputFile inputFile = HadoopInputFile.fromPath(path, conf);
  try (SeekableInputStream inputStream = inputFile.newStream()) {
    int fileLen = (int) inputFile.getLength();
    byte[] fileBytes = new byte[fileLen];
    inputStream.readFully(fileBytes);
    inputStream.close();

    // There are 4 pages in total (2 per column), we corrupt the first page of the first column
    // and the second page of the second column. We do this by altering a byte roughly in the
    // middle of each page to be corrupted
    fileBytes[fileLen / 8]++;
    fileBytes[fileLen / 8 + ((fileLen / 4) * 3)]++;

    OutputFile outputFile = HadoopOutputFile.fromPath(path, conf);
    try (PositionOutputStream outputStream = outputFile.createOrOverwrite(1024 * 1024)) {
      outputStream.write(fileBytes);
      outputStream.close();

      // First we disable checksum verification, the corruption will go undetected as it is in the
      // data section of the page
      conf.setBoolean(ParquetInputFormat.PAGE_VERIFY_CHECKSUM_ENABLED, false);
      try (ParquetFileReader reader = getParquetFileReader(path, conf,
          Arrays.asList(colADesc, colBDesc))) {
        PageReadStore pageReadStore = reader.readNextRowGroup();

        DataPageV1 colAPage1 = readNextPage(colADesc, pageReadStore);
        assertFalse("Data in page was not corrupted",
            Arrays.equals(colAPage1.getBytes().toByteArray(), colAPage1Bytes));
        readNextPage(colADesc, pageReadStore);
        readNextPage(colBDesc, pageReadStore);
        DataPageV1 colBPage2 = readNextPage(colBDesc, pageReadStore);
        assertFalse("Data in page was not corrupted",
            Arrays.equals(colBPage2.getBytes().toByteArray(), colBPage2Bytes));
      }

      // Now we enable checksum verification, the corruption should be detected
      conf.setBoolean(ParquetInputFormat.PAGE_VERIFY_CHECKSUM_ENABLED, true);
      try (ParquetFileReader reader = getParquetFileReader(path, conf,
          Arrays.asList(colADesc, colBDesc))) {
        // We expect an exception on the first encountered corrupt page (in readAllPages)
        assertVerificationFailed(reader);
      }
    }
  }
}