htsjdk.tribble.readers.AsciiLineReader Java Examples
The following examples show how to use htsjdk.tribble.readers.AsciiLineReader. Each example is taken from an open-source project; the source file, project, and license are noted above each snippet.
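Before the project examples, here is a minimal sketch of the basic pattern, assuming a recent htsjdk version on the classpath; the file name is a placeholder. An InputStream is wrapped in an AsciiLineReader and its lines are iterated through a LineIteratorImpl, the same pairing used in the examples below.

import htsjdk.tribble.readers.AsciiLineReader;
import htsjdk.tribble.readers.LineIteratorImpl;

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class AsciiLineReaderSketch {
    public static void main(final String[] args) throws IOException {
        // "example.vcf" is a placeholder path; any line-oriented text file works.
        try (final InputStream in = new FileInputStream("example.vcf");
             final LineIteratorImpl lines = new LineIteratorImpl(AsciiLineReader.from(in))) {
            while (lines.hasNext()) {
                System.out.println(lines.next());
            }
        }
    }
}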
Example #1
Source File: FindMendelianViolationsTest.java From picard with MIT License
/**
 * Returns the number of lines in the file that contain a regular expression (decorated with "MV=" and
 * expected to be in an INFO field in a vcf).
 *
 * @param file  File to examine
 * @param regex String containing a regular expression to look for in the file
 * @return the number of lines that contain regex
 */
private int grep(final File file, final String regex) {
    int results = 0;
    final Pattern pattern = Pattern.compile(".*" + regex + ".*");

    try (final LineIteratorImpl li = new LineIteratorImpl(new AsciiLineReader(IOUtil.openFileForReading(file)))) {
        while (li.hasNext()) {
            final String line = li.next();
            if (pattern.matcher(line).matches()) {
                results++;
            }
        }
    } catch (final IOException e) {
        e.printStackTrace();
    }
    return results;
}
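In the surrounding Picard test this helper is used to count VCF records that received an MV annotation; a hypothetical call such as grep(outputVcf, "MV=") would return the number of lines containing that tag.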
Example #2
Source File: VCFRecordWriter.java From Hadoop-BAM with MIT License
/** A VCFHeader is read from the input Path. */
public VCFRecordWriter(
        Path output, Path input, boolean writeHeader, TaskAttemptContext ctx)
    throws IOException {
    final AsciiLineReader r = new AsciiLineReader(
        input.getFileSystem(ctx.getConfiguration()).open(input));

    final FeatureCodecHeader h = codec.readHeader(new AsciiLineReaderIterator(r));
    if (h == null || !(h.getHeaderValue() instanceof VCFHeader))
        throw new IOException("No VCF header found in " + input);

    r.close();

    init(output, (VCFHeader) h.getHeaderValue(), writeHeader, ctx);
}
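Here codec is a field of the enclosing record writer (presumably an htsjdk VCF codec in Hadoop-BAM); readHeader consumes only the header lines through the AsciiLineReaderIterator, and the resulting FeatureCodecHeader is checked to hold a VCFHeader before the reader is closed and the writer is initialized.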
Example #3
Source File: SimpleReference.java From varsim with BSD 2-Clause "Simplified" License
public long getNumNonNBases(final File regions) throws IOException {
    loadAllSequences();
    long count = 0;
    final FeatureCodec<BEDFeature, LineIterator> bedCodec = new BEDCodec(BEDCodec.StartOffset.ONE);
    final LineIterator lineIterator = new AsciiLineReaderIterator(new AsciiLineReader(new FileInputStream(regions)));
    while (lineIterator.hasNext()) {
        final BEDFeature bedFeature = bedCodec.decode(lineIterator);
        count += data.get(new ChrString(bedFeature.getContig())).getNumNonNBases(bedFeature.getStart(), bedFeature.getEnd());
    }
    return count;
}
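Note that BEDCodec.StartOffset.ONE asks the codec to report 1-based coordinates for the 0-based BED intervals, and each call to decode(LineIterator) advances the iterator by one line, so the loop walks the regions file one record at a time.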
Example #4
Source File: LocatableXsvFuncotationFactory.java From gatk with BSD 3-Clause "New" or "Revised" License
/**
 * Set the field names that this {@link LocatableXsvFuncotationFactory} can create.
 * Does so by reading the headers of backing data files for this {@link LocatableXsvFuncotationFactory}.
 * @param inputDataFilePath {@link Path} to a backing data file from which annotations can be made for this {@link LocatableXsvFuncotationFactory}. Must not be {@code null}.
 */
public void setSupportedFuncotationFields(final Path inputDataFilePath) {

    Utils.nonNull(inputDataFilePath);

    if ( supportedFieldNames == null ) {
        synchronized ( this ) {
            if ( supportedFieldNames == null ) {

                // Approximate / arbitrary starting size:
                supportedFieldNames = new LinkedHashSet<>(10);

                // Set up a codec here to read the config file.
                // We have to call canDecode to set up the internal state of the XsvLocatableTableCodec:
                final XsvLocatableTableCodec codec = new XsvLocatableTableCodec();

                try {
                    if ( !codec.canDecode(mainSourceFileAsFeatureInput.getFeaturePath()) ) {
                        // This should never happen because we have already validated this config file by the time we
                        // reach here:
                        throw new GATKException.ShouldNeverReachHereException("Could not decode from data file: " + mainSourceFileAsFeatureInput.getFeaturePath());
                    }
                }
                catch ( final NullPointerException ex ) {
                    // This should never happen because we have already validated this config file by the time we
                    // reach here:
                    throw new GATKException.ShouldNeverReachHereException("Could not decode from data file! Has not been set yet!");
                }

                // Get the info from our path:
                final List<String> columnNames;
                try (final InputStream fileInputStream = Files.newInputStream(inputDataFilePath)) {
                    final AsciiLineReaderIterator lineReaderIterator = new AsciiLineReaderIterator(AsciiLineReader.from(fileInputStream));
                    codec.readActualHeader(lineReaderIterator);
                    columnNames = codec.getHeaderWithoutLocationColumns();
                }
                catch (final IOException ioe) {
                    throw new UserException.BadInput("Could not read header from data file: " + inputDataFilePath.toUri().toString(), ioe);
                }

                // Make sure we actually read the header:
                if ( columnNames == null ) {
                    throw new UserException.MalformedFile("Could not decode from data file: " + inputDataFilePath.toUri().toString());
                }
                supportedFieldNames.addAll(columnNames);

                // Initialize our field name lists:
                initializeFieldNameLists();

                // Adjust the manual annotations to make sure we don't try to annotate any fields we aren't
                // responsible for:
                annotationOverrideMap.entrySet().removeIf( e -> !supportedFieldNames.contains(e.getKey()) );
            }
        }
    }
}
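Unlike the earlier examples, this one obtains its reader through the static factory AsciiLineReader.from(InputStream) rather than the constructor; newer htsjdk releases deprecate the InputStream constructor in favor of this factory.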