Java Code Examples for org.apache.hadoop.hbase.CellUtil#matchingColumn()
The following examples show how to use org.apache.hadoop.hbase.CellUtil#matchingColumn().
Each example is taken from an open-source project; the project, source file, and license are noted above the code.
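Before the project examples, here is a minimal standalone sketch (not taken from any project below; the row, family, and qualifier values are made up) showing what CellUtil.matchingColumn() checks: whether a cell's column family and qualifier both equal the supplied byte arrays.

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class MatchingColumnSketch {
  public static void main(String[] args) {
    // "row1", "cf", "q1" are placeholder names for this sketch.
    byte[] family = Bytes.toBytes("cf");
    byte[] qualifier = Bytes.toBytes("q1");
    // KeyValue implements Cell, so it can be handed to CellUtil directly.
    Cell cell = new KeyValue(Bytes.toBytes("row1"), family, qualifier, Bytes.toBytes("value"));

    // true: both the family and the qualifier match
    System.out.println(CellUtil.matchingColumn(cell, family, qualifier));
    // false: the family matches but the qualifier does not
    System.out.println(CellUtil.matchingColumn(cell, family, Bytes.toBytes("q2")));
  }
}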
Example 1
Source File: Result.java From hbase with Apache License 2.0 | 6 votes |
/**
 * Return the Cells for the specific column. The Cells are sorted in
 * the {@link CellComparator} order. That implies the first entry in
 * the list is the most recent column. If the query (Scan or Get) only
 * requested 1 version the list will contain at most 1 entry. If the column
 * did not exist in the result set (either the column does not exist
 * or the column was not selected in the query) the list will be empty.
 *
 * Also see getColumnLatestCell which returns just a Cell
 *
 * @param family the family
 * @param qualifier the qualifier
 * @return a list of Cells for this column or an empty list if the column
 *   did not exist in the result set
 */
public List<Cell> getColumnCells(byte [] family, byte [] qualifier) {
  List<Cell> result = new ArrayList<>();

  Cell [] kvs = rawCells();

  if (kvs == null || kvs.length == 0) {
    return result;
  }
  int pos = binarySearch(kvs, family, qualifier);
  if (pos == -1) {
    return result; // can't find it
  }

  for (int i = pos; i < kvs.length; i++) {
    if (CellUtil.matchingColumn(kvs[i], family, qualifier)) {
      result.add(kvs[i]);
    } else {
      break;
    }
  }

  return result;
}
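For context, here is a hedged caller-side sketch of how getColumnCells() above might be used; the table name, row key, and column names are placeholders and not part of the original example.

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class GetColumnCellsSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    // "my_table", "row1", "cf", "q1" are placeholder names.
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("my_table"))) {
      Get get = new Get(Bytes.toBytes("row1"));
      get.readVersions(3); // request up to 3 versions so the list can hold more than one cell
      Result result = table.get(get);
      // All returned versions of cf:q1, newest first (CellComparator order).
      List<Cell> cells = result.getColumnCells(Bytes.toBytes("cf"), Bytes.toBytes("q1"));
      for (Cell cell : cells) {
        System.out.println(cell.getTimestamp() + " -> " + Bytes.toString(CellUtil.cloneValue(cell)));
      }
    }
  }
}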
Example 2
Source File: DependentColumnFilter.java From hbase with Apache License 2.0 | 6 votes |
@Override
public ReturnCode filterCell(final Cell c) {
  // Check if the column and qualifier match
  if (!CellUtil.matchingColumn(c, this.columnFamily, this.columnQualifier)) {
    // include non-matches for the time being, they'll be discarded afterwards
    return ReturnCode.INCLUDE;
  }
  // If it doesn't pass the op, skip it
  if (comparator != null && compareValue(getCompareOperator(), comparator, c)) {
    return ReturnCode.SKIP;
  }

  stampSet.add(c.getTimestamp());
  if (dropDependentColumn) {
    return ReturnCode.SKIP;
  }
  return ReturnCode.INCLUDE;
}
Example 3
Source File: SingleColumnValueFilter.java From hbase with Apache License 2.0 | 6 votes |
@Override
public ReturnCode filterCell(final Cell c) {
  // System.out.println("REMOVE KEY=" + keyValue.toString() + ", value=" + Bytes.toString(keyValue.getValue()));
  if (this.matchedColumn) {
    // We already found and matched the single column, all keys now pass
    return ReturnCode.INCLUDE;
  } else if (this.latestVersionOnly && this.foundColumn) {
    // We found but did not match the single column, skip to next row
    return ReturnCode.NEXT_ROW;
  }
  if (!CellUtil.matchingColumn(c, this.columnFamily, this.columnQualifier)) {
    return ReturnCode.INCLUDE;
  }
  foundColumn = true;
  if (filterColumnValue(c)) {
    return this.latestVersionOnly ? ReturnCode.NEXT_ROW : ReturnCode.INCLUDE;
  }
  this.matchedColumn = true;
  return ReturnCode.INCLUDE;
}
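A hedged sketch of how the SingleColumnValueFilter above is typically attached to a scan; the table name, column names, and value are illustrative assumptions, not taken from the HBase sources.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class SingleColumnValueFilterSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    // "my_table", "cf", "status", "ACTIVE" are placeholders.
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("my_table"))) {
      // Keep only rows whose cf:status column equals "ACTIVE".
      SingleColumnValueFilter filter = new SingleColumnValueFilter(
          Bytes.toBytes("cf"), Bytes.toBytes("status"),
          CompareOperator.EQUAL, Bytes.toBytes("ACTIVE"));
      // Also drop rows that do not have the column at all.
      filter.setFilterIfMissing(true);

      Scan scan = new Scan();
      scan.setFilter(filter);
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
          System.out.println(Bytes.toString(result.getRow()));
        }
      }
    }
  }
}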
Example 4
Source File: IndexRegionObserver.java From phoenix with Apache License 2.0 | 6 votes |
/**
 * IndexMaintainer.getIndexedColumns() returns the data column references for indexed columns. The data columns
 * are grouped into three classes: pk columns (data table pk columns), the indexed columns (the columns for which
 * we want to have indexing; they form the prefix of the primary key for the index table, after salt and tenant id),
 * and covered columns. The purpose of this method is to find out if all the indexed columns are included in the
 * pending data table mutation pointed to by multiMutation.
 */
private boolean hasAllIndexedColumns(IndexMaintainer indexMaintainer, MultiMutation multiMutation) {
  Map<byte[], List<Cell>> familyMap = multiMutation.getFamilyCellMap();
  for (ColumnReference columnReference : indexMaintainer.getIndexedColumns()) {
    byte[] family = columnReference.getFamily();
    List<Cell> cellList = familyMap.get(family);
    if (cellList == null) {
      return false;
    }
    boolean has = false;
    for (Cell cell : cellList) {
      if (CellUtil.matchingColumn(cell, family, columnReference.getQualifier())) {
        has = true;
        break;
      }
    }
    if (!has) {
      return false;
    }
  }
  return true;
}
Example 5
Source File: PhoenixRowTimestampFunction.java From phoenix with Apache License 2.0 | 6 votes |
/**
 * The evaluate method is called under the following conditions -
 * 1. When PHOENIX_ROW_TIMESTAMP() is evaluated in the projection list.
 *    Since the EMPTY_COLUMN is not part of the table column list,
 *    emptyColumnKV will be null.
 *    PHOENIX-4179 ensures that the maxTS (which will be the EMPTY_COLUMN ts)
 *    is returned for the tuple.
 *
 * 2. When PHOENIX_ROW_TIMESTAMP() is evaluated in the backend as part of the where clause.
 *    Here the emptyColumnKV will not be null, since we ensured that by adding it to the
 *    scan column list in PhoenixRowTimestampParseNode.
 *    In this case emptyColumnKV.getTimestamp() is used.
 */
@Override
public boolean evaluate(Tuple tuple, ImmutableBytesWritable ptr) {
  if (tuple == null || tuple.size() == 0) {
    return false;
  }

  byte[] emptyCF = ((KeyValueColumnExpression) children.get(0)).getColumnFamily();
  byte[] emptyCQ = ((KeyValueColumnExpression) children.get(0)).getColumnQualifier();
  long ts = tuple.getValue(0).getTimestamp();
  Cell emptyColumnKV = tuple.getValue(emptyCF, emptyCQ);
  if ((emptyColumnKV != null) && CellUtil.matchingColumn(emptyColumnKV, emptyCF, emptyCQ)) {
    ts = emptyColumnKV.getTimestamp();
  }
  Date rowTimestamp = new Date(ts);
  ptr.set(PDate.INSTANCE.toBytes(rowTimestamp));
  return true;
}
Example 6
Source File: Result.java From hbase with Apache License 2.0 | 5 votes |
/**
 * The Cell for the most recent timestamp for a given column.
 *
 * @param family the family
 * @param qualifier the qualifier
 *
 * @return the Cell for the column, or null if no value exists in the row or none have been
 *   selected in the query (Get/Scan)
 */
public Cell getColumnLatestCell(byte [] family, byte [] qualifier) {
  Cell [] kvs = rawCells(); // side effect possibly.
  if (kvs == null || kvs.length == 0) {
    return null;
  }
  int pos = binarySearch(kvs, family, qualifier);
  if (pos == -1) {
    return null;
  }
  if (CellUtil.matchingColumn(kvs[pos], family, qualifier)) {
    return kvs[pos];
  }
  return null;
}
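A short hedged sketch of calling getColumnLatestCell() on a Result already obtained from a Get or Scan; the column names are illustrative only.

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class LatestCellSketch {
  // 'result' is assumed to come from table.get(get) or a scanner; "cf" and "q1" are placeholders.
  static void printLatest(Result result) {
    Cell latest = result.getColumnLatestCell(Bytes.toBytes("cf"), Bytes.toBytes("q1"));
    if (latest != null) {
      System.out.println(latest.getTimestamp() + " -> "
          + Bytes.toString(CellUtil.cloneValue(latest)));
    }
  }
}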
Example 7
Source File: FilterListWithOR.java From hbase with Apache License 2.0 | 5 votes |
/**
 * For MUST_PASS_ONE, we cannot make sure that when filter-A in the filter list returns NEXT_COL,
 * the next cell passed to the filter list will be the first cell in the next column, because if
 * filter-B in the filter list returns SKIP, then the filter list will return SKIP. In this case,
 * we should pass the cell following the previous cell, and it's possible that the next cell has
 * the same column as the previous cell even if filter-A returned NEXT_COL for the previous cell.
 * So we should save the previous cell and the return code list when checking the previous cell
 * for every filter in the filter list, and verify whether currentCell fits the previous return
 * code; if it fits, pass currentCell to the corresponding filter. (HBASE-17678) <br>
 * Note that: at the StoreScanner level, NEXT_ROW will skip to the next row in the current family,
 * while at the RegionScanner level, NEXT_ROW will skip to the next row in the current family and
 * switch to the next family; INCLUDE_AND_NEXT_ROW is the same. So we should pass the current cell
 * to the filter if the row mismatches, or the row matches but the column family mismatches.
 * (HBASE-18368)
 * @see org.apache.hadoop.hbase.filter.Filter.ReturnCode
 * @param subFilter which sub-filter to calculate the return code for by using the previous cell
 *          and previous return code.
 * @param prevCell the previous cell passed to the given sub-filter.
 * @param currentCell the current cell which will be passed to the given sub-filter.
 * @param prevCode the previous return code for the given sub-filter.
 * @return return code calculated from the previous cell and previous return code. null means we
 *         cannot decide which return code to return, so we will pass currentCell to subFilter to
 *         get currentCell's return code, and it won't impact the sub-filter's internal state.
 */
private ReturnCode calculateReturnCodeByPrevCellAndRC(Filter subFilter, Cell currentCell,
    Cell prevCell, ReturnCode prevCode) throws IOException {
  if (prevCell == null || prevCode == null) {
    return null;
  }
  switch (prevCode) {
    case INCLUDE:
    case SKIP:
      return null;
    case SEEK_NEXT_USING_HINT:
      Cell nextHintCell = subFilter.getNextCellHint(prevCell);
      return nextHintCell != null && compareCell(currentCell, nextHintCell) < 0
          ? ReturnCode.SEEK_NEXT_USING_HINT : null;
    case NEXT_COL:
    case INCLUDE_AND_NEXT_COL:
      // Once the row changes, reset() will clear prevCells, so we need not compare rows here
      // because the rows are the same.
      return CellUtil.matchingColumn(prevCell, currentCell) ? ReturnCode.NEXT_COL : null;
    case NEXT_ROW:
    case INCLUDE_AND_SEEK_NEXT_ROW:
      // As described above, the rows are definitely the same, so we only compare the family.
      return CellUtil.matchingFamily(prevCell, currentCell) ? ReturnCode.NEXT_ROW : null;
    default:
      throw new IllegalStateException("Received code is not valid.");
  }
}
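The method above is internal to HBase's FilterList machinery; for orientation, here is a hedged sketch of the MUST_PASS_ONE (logical OR) setup it serves. The column names and values are made up for illustration.

import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class MustPassOneSketch {
  public static Scan buildScan() {
    // Rows pass if cf:type equals "A" OR cf:type equals "B"; FilterListWithOR
    // is the implementation HBase uses internally for MUST_PASS_ONE.
    SingleColumnValueFilter isA = new SingleColumnValueFilter(
        Bytes.toBytes("cf"), Bytes.toBytes("type"), CompareOperator.EQUAL, Bytes.toBytes("A"));
    SingleColumnValueFilter isB = new SingleColumnValueFilter(
        Bytes.toBytes("cf"), Bytes.toBytes("type"), CompareOperator.EQUAL, Bytes.toBytes("B"));
    FilterList orList = new FilterList(FilterList.Operator.MUST_PASS_ONE, isA, isB);

    Scan scan = new Scan();
    scan.setFilter(orList);
    return scan;
  }
}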
Example 8
Source File: ColumnValueFilter.java From hbase with Apache License 2.0 | 5 votes |
@Override
public ReturnCode filterCell(Cell c) throws IOException {
  // 1. Check column match
  if (!CellUtil.matchingColumn(c, this.family, this.qualifier)) {
    return columnFound ? ReturnCode.NEXT_ROW : ReturnCode.NEXT_COL;
  }
  // Column found
  columnFound = true;
  // 2. Check value match:
  // True means filter out, just skip this cell, else include it.
  return compareValue(getCompareOperator(), getComparator(), c) ?
    ReturnCode.SKIP : ReturnCode.INCLUDE;
}
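A hedged usage sketch for ColumnValueFilter; unlike SingleColumnValueFilter in Example 3, this filter emits only the matching cell rather than the whole row. The column names and value are placeholders.

import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.ColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ColumnValueFilterSketch {
  public static Scan buildScan() {
    // Return only the cf:status cells whose value equals "ACTIVE";
    // other columns of the row are not emitted.
    ColumnValueFilter filter = new ColumnValueFilter(
        Bytes.toBytes("cf"), Bytes.toBytes("status"),
        CompareOperator.EQUAL, Bytes.toBytes("ACTIVE"));
    Scan scan = new Scan();
    scan.setFilter(filter);
    return scan;
  }
}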
Example 9
Source File: SingleColumnValueExcludeFilter.java From hbase with Apache License 2.0 | 5 votes |
@Override
public void filterRowCells(List<Cell> kvs) {
  Iterator<? extends Cell> it = kvs.iterator();
  while (it.hasNext()) {
    // If the current column is actually the tested column,
    // we will skip it instead.
    if (CellUtil.matchingColumn(it.next(), this.columnFamily, this.columnQualifier)) {
      it.remove();
    }
  }
}
Example 10
Source File: IndexToolVerificationResult.java From phoenix with Apache License 2.0 | 5 votes |
public void update(Cell cell) {
  if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, SCANNED_DATA_ROW_COUNT_BYTES)) {
    addScannedDataRowCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, REBUILT_INDEX_ROW_COUNT_BYTES)) {
    addRebuiltIndexRowCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, BEFORE_REBUILD_VALID_INDEX_ROW_COUNT_BYTES)) {
    addBeforeRebuildValidIndexRowCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, BEFORE_REBUILD_EXPIRED_INDEX_ROW_COUNT_BYTES)) {
    addBeforeRebuildExpiredIndexRowCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, BEFORE_REBUILD_MISSING_INDEX_ROW_COUNT_BYTES)) {
    addBeforeRebuildMissingIndexRowCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, BEFORE_REBUILD_INVALID_INDEX_ROW_COUNT_BYTES)) {
    addBeforeRebuildInvalidIndexRowCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, BEFORE_REBUILD_INVALID_INDEX_ROW_COUNT_COZ_EXTRA_CELLS_BYTES)) {
    addBeforeIndexHasExtraCellsCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, BEFORE_REBUILD_INVALID_INDEX_ROW_COUNT_COZ_MISSING_CELLS_BYTES)) {
    addBeforeIndexHasMissingCellsCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, BEFORE_REBUILD_UNVERIFIED_INDEX_ROW_COUNT_BYTES)) {
    addBeforeUnverifiedIndexRowCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, BEFORE_REBUILD_OLD_INDEX_ROW_COUNT_BYTES)) {
    addBeforeOldIndexRowCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, BEFORE_REBUILD_UNKNOWN_INDEX_ROW_COUNT_BYTES)) {
    addBeforeUnknownIndexRowCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, AFTER_REBUILD_VALID_INDEX_ROW_COUNT_BYTES)) {
    addAfterRebuildValidIndexRowCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, AFTER_REBUILD_EXPIRED_INDEX_ROW_COUNT_BYTES)) {
    addAfterRebuildExpiredIndexRowCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, AFTER_REBUILD_MISSING_INDEX_ROW_COUNT_BYTES)) {
    addAfterRebuildMissingIndexRowCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, AFTER_REBUILD_INVALID_INDEX_ROW_COUNT_BYTES)) {
    addAfterRebuildInvalidIndexRowCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, AFTER_REBUILD_INVALID_INDEX_ROW_COUNT_COZ_EXTRA_CELLS_BYTES)) {
    addAfterIndexHasExtraCellsCount(getValue(cell));
  } else if (CellUtil.matchingColumn(cell, RESULT_TABLE_COLUMN_FAMILY, AFTER_REBUILD_INVALID_INDEX_ROW_COUNT_COZ_MISSING_CELLS_BYTES)) {
    addAfterIndexHasMissingCellsCount(getValue(cell));
  }
}
Example 11
Source File: NamespaceTableCfWALEntryFilter.java From hbase with Apache License 2.0 | 4 votes |
@Override
public Cell filterCell(final Entry entry, Cell cell) {
  ReplicationPeerConfig peerConfig = this.peer.getPeerConfig();
  if (peerConfig.replicateAllUserTables()) {
    // replicate all user tables, but filter by exclude table-cfs config
    final Map<TableName, List<String>> excludeTableCfs = peerConfig.getExcludeTableCFsMap();
    if (excludeTableCfs == null) {
      return cell;
    }
    if (CellUtil.matchingColumn(cell, WALEdit.METAFAMILY, WALEdit.BULK_LOAD)) {
      cell = bulkLoadFilter.filterCell(cell,
        fam -> filterByExcludeTableCfs(entry.getKey().getTableName(), Bytes.toString(fam),
          excludeTableCfs));
    } else {
      if (filterByExcludeTableCfs(entry.getKey().getTableName(),
          Bytes.toString(cell.getFamilyArray(), cell.getFamilyOffset(), cell.getFamilyLength()),
          excludeTableCfs)) {
        return null;
      }
    }
    return cell;
  } else {
    // not replicate all user tables, so filter by table-cfs config
    final Map<TableName, List<String>> tableCfs = peerConfig.getTableCFsMap();
    if (tableCfs == null) {
      return cell;
    }
    if (CellUtil.matchingColumn(cell, WALEdit.METAFAMILY, WALEdit.BULK_LOAD)) {
      cell = bulkLoadFilter.filterCell(cell,
        fam -> filterByTableCfs(entry.getKey().getTableName(), Bytes.toString(fam), tableCfs));
    } else {
      if (filterByTableCfs(entry.getKey().getTableName(),
          Bytes.toString(cell.getFamilyArray(), cell.getFamilyOffset(), cell.getFamilyLength()),
          tableCfs)) {
        return null;
      }
    }
    return cell;
  }
}
Example 12
Source File: WALEdit.java From hbase with Apache License 2.0 | 4 votes |
public static FlushDescriptor getFlushDescriptor(Cell cell) throws IOException {
  return CellUtil.matchingColumn(cell, METAFAMILY, FLUSH) ?
    FlushDescriptor.parseFrom(CellUtil.cloneValue(cell)) : null;
}
Example 13
Source File: SingleCellExtractor.java From hbase-indexer with Apache License 2.0 | 4 votes |
@Override
public boolean isApplicable(KeyValue keyValue) {
  return CellUtil.matchingColumn(keyValue, columnFamily, columnQualifier);
}
Example 14
Source File: BasePayloadExtractor.java From hbase-indexer with Apache License 2.0 | 3 votes |
/**
 * Extract the payload data from a KeyValue.
 * <p>
 * Data will only be extracted if it matches the configured table, column family, and column qualifiers. If no
 * payload data can be extracted, null will be returned.
 *
 * @param tableName table to which the {@code KeyValue} is being applied
 * @param keyValue contains a (partial) row mutation which may include payload data
 * @return the extracted payload data, or null if no payload data is included in the supplied {@code KeyValue}
 */
@Override
public byte[] extractPayload(byte[] tableName, KeyValue keyValue) {
  if (Bytes.equals(this.tableName, tableName)
      && CellUtil.matchingColumn(keyValue, columnFamily, columnQualifier)) {
    return CellUtil.cloneValue(keyValue);
  } else {
    return null;
  }
}
Example 15
Source File: WALEdit.java From hbase with Apache License 2.0 | 2 votes |
/**
 * Returns true if the given cell is a serialized {@link CompactionDescriptor}
 *
 * @see #getCompaction(Cell)
 */
public static boolean isCompactionMarker(Cell cell) {
  return CellUtil.matchingColumn(cell, METAFAMILY, COMPACTION);
}
Example 16
Source File: WALEdit.java From hbase with Apache License 2.0 | 2 votes |
/**
 * Deserializes and returns a BulkLoadDescriptor from the passed-in Cell.
 * @param cell the key value
 * @return deserialized BulkLoadDescriptor or null.
 */
public static WALProtos.BulkLoadDescriptor getBulkLoadDescriptor(Cell cell) throws IOException {
  return CellUtil.matchingColumn(cell, METAFAMILY, BULK_LOAD) ?
    WALProtos.BulkLoadDescriptor.parseFrom(CellUtil.cloneValue(cell)) : null;
}