org.apache.hadoop.hbase.client.HTablePool Java Examples
The following examples show how to use org.apache.hadoop.hbase.client.HTablePool.
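All of the examples below target the HBase 0.9x client API: HTablePool was deprecated in the later 0.9x releases and removed in HBase 1.0 in favor of tables obtained from an HConnection. As a point of reference before the individual examples, here is a minimal sketch of the typical pool lifecycle; the class name, table name, and row key are illustrative, not taken from any of the projects below.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class HTablePoolLifecycle {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    // Pool at most 10 table references per table name.
    HTablePool pool = new HTablePool(conf, 10);
    // "example_table" and the row key are illustrative.
    HTableInterface table = pool.getTable("example_table");
    try {
      Result result = table.get(new Get(Bytes.toBytes("row1")));
      System.out.println("Row is empty: " + result.isEmpty());
    } finally {
      // Closing a pooled table returns it to the pool rather than
      // releasing the underlying connection.
      table.close();
    }
    // Release all pooled tables when the application shuts down.
    pool.close();
  }
}

Note the asymmetry: table.close() only returns the table to the pool; pool.close() is what actually releases the pooled tables.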
Example #1
Source File: SpecificAvroDao.java From kite with Apache License 2.0
/**
 * Create a CompositeDao, which will return SpecificRecord instances
 * in a Map container.
 *
 * @param tablePool
 *          An HTablePool instance to use for connecting to HBase
 * @param tableName
 *          The table name this dao will read from and write to
 * @param subEntitySchemaStrings
 *          The list of entities that make up the composite.
 * @return The CompositeDao instance.
 * @throws SchemaNotFoundException
 * @throws ValidationException
 */
@SuppressWarnings("unchecked")
public static <K extends SpecificRecord, S extends SpecificRecord> Dao<Map<String, S>> buildCompositeDao(
    HTablePool tablePool, String tableName,
    List<String> subEntitySchemaStrings) {

  List<EntityMapper<S>> entityMappers = new ArrayList<EntityMapper<S>>();
  for (String subEntitySchemaString : subEntitySchemaStrings) {
    AvroEntitySchema subEntitySchema = parser
        .parseEntitySchema(subEntitySchemaString);
    Class<S> subEntityClass;
    try {
      subEntityClass = (Class<S>) Class.forName(subEntitySchema
          .getAvroSchema().getFullName());
    } catch (ClassNotFoundException e) {
      throw new RuntimeException(e);
    }
    entityMappers.add(SpecificAvroDao.<S> buildEntityMapper(
        subEntitySchemaString, subEntitySchemaString, subEntityClass));
  }

  return new SpecificMapCompositeAvroDao<S>(tablePool, tableName,
      entityMappers);
}
Example #2
Source File: HbFactory.java From tddl5 with Apache License 2.0
private void initConfiguration() {
  if (clusterConfig.get(HbaseConf.cluster_name) == null
      || "".equals(clusterConfig.get(HbaseConf.cluster_name))) {
    throw new IllegalArgumentException("cluster name can not be null or ''!");
  }
  clusterName = clusterConfig.get(HbaseConf.cluster_name);
  Configuration conf = HBaseConfiguration.create();
  conf.set(HbaseConf.hbase_quorum, clusterConfig.get(HbaseConf.hbase_quorum));
  conf.set(HbaseConf.hbase_clientPort, clusterConfig.get(HbaseConf.hbase_clientPort));
  if (null != clusterConfig.get(HbaseConf.hbase_znode_parent)) {
    conf.set(HbaseConf.hbase_znode_parent, clusterConfig.get(HbaseConf.hbase_znode_parent));
  }
  conf.set("hbase.client.retries.number", "5");
  conf.set("hbase.client.pause", "200");
  conf.set("ipc.ping.interval", "3000");
  conf.setBoolean("hbase.ipc.client.tcpnodelay", true);
  if (this.checkConfiguration(clusterConfig.get(HbaseConf.cluster_name), conf)) {
    configuration = conf;
    tablePool = new HTablePool(conf, 100);
  }
}
Example #3
Source File: ServerUtil.java From phoenix with Apache License 2.0
private static HTableInterface getTableFromSingletonPool(
    RegionCoprocessorEnvironment env, byte[] tableName) throws IOException {
  // It's ok to never do a pool.close() as we're storing a single
  // table only. The HTablePool holds no other resources than this table,
  // which will be closed itself when it's no longer needed.
  @SuppressWarnings("resource")
  HTablePool pool = new HTablePool(env.getConfiguration(), 1);
  try {
    return pool.getTable(tableName);
  } catch (RuntimeException t) {
    // Handle cases where an IOException is wrapped inside a
    // RuntimeException, like HTableInterface#createHTableInterface
    if (t.getCause() instanceof IOException) {
      throw (IOException) t.getCause();
    } else {
      throw t;
    }
  }
}
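A caller of this helper is expected to close the returned table itself; closing a pooled table returns it to the (otherwise unreferenced) singleton pool. A minimal sketch of that calling pattern — the table name and the surrounding coprocessor context are assumptions, not code from Phoenix:

// Hypothetical caller inside a coprocessor hook; `env` is the
// RegionCoprocessorEnvironment passed to the hook.
HTableInterface table = getTableFromSingletonPool(env,
    Bytes.toBytes("SOME_SIDE_TABLE")); // illustrative table name
try {
  Result r = table.get(new Get(Bytes.toBytes("row1")));
} finally {
  table.close(); // releases the pooled table, the pool's only entry
}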
Example #4
Source File: SpecificAvroDao.java From kite with Apache License 2.0
/**
 * Create a CompositeDao, which will return SpecificRecord instances
 * in a Map container.
 *
 * @param tablePool
 *          An HTablePool instance to use for connecting to HBase.
 * @param tableName
 *          The table name of the managed schema.
 * @param subEntityClasses
 *          The classes that make up the subentities.
 * @param schemaManager
 *          The SchemaManager to use to create the entity mappers that will
 *          power this dao.
 * @return The CompositeDao instance.
 * @throws SchemaNotFoundException
 */
public static <K extends SpecificRecord, S extends SpecificRecord> Dao<Map<String, S>> buildCompositeDaoWithEntityManager(
    HTablePool tablePool, String tableName, List<Class<S>> subEntityClasses,
    SchemaManager schemaManager) {

  List<EntityMapper<S>> entityMappers = new ArrayList<EntityMapper<S>>();
  for (Class<S> subEntityClass : subEntityClasses) {
    String entityName = getSchemaFromEntityClass(subEntityClass).getName();
    entityMappers.add(new VersionedAvroEntityMapper.Builder()
        .setSchemaManager(schemaManager).setTableName(tableName)
        .setEntityName(entityName).setSpecific(true)
        .<S> build());
  }

  return new SpecificMapCompositeAvroDao<S>(tablePool, tableName,
      entityMappers);
}
Example #5
Source File: SpecificAvroDao.java From kite with Apache License 2.0
public SpecificCompositeAvroDao(HTablePool tablePool, String tableName,
    List<EntityMapper<S>> entityMappers, Class<E> entityClass) {
  super(tablePool, tableName, entityMappers);
  this.entityClass = entityClass;
  try {
    entityConstructor = entityClass.getConstructor();
    entitySchema = (Schema) entityClass.getDeclaredField("SCHEMA$").get(null);
  } catch (Throwable e) {
    LOG.error("Error getting constructor or schema field for entity of type: "
        + entityClass.getName(), e);
    throw new DatasetException(e);
  }
}
Example #6
Source File: BaseEntityBatch.java From kite with Apache License 2.0
/**
 * Checks an HTable out of the HTablePool and modifies it to take advantage
 * of batch puts. This is very useful when performing many consecutive puts.
 *
 * @param clientTemplate
 *          The client template to use
 * @param entityMapper
 *          The EntityMapper to use for mapping
 * @param pool
 *          The HBase table pool
 * @param tableName
 *          The name of the HBase table
 * @param writeBufferSize
 *          The batch buffer size in bytes.
 */
public BaseEntityBatch(HBaseClientTemplate clientTemplate,
    EntityMapper<E> entityMapper, HTablePool pool, String tableName,
    long writeBufferSize) {
  this.table = pool.getTable(tableName);
  this.table.setAutoFlush(false);
  this.clientTemplate = clientTemplate;
  this.entityMapper = entityMapper;
  this.state = ReaderWriterState.NEW;

  /*
   * If the writeBufferSize is less than the currentBufferSize, then the
   * buffer will get flushed automatically by HBase. This should never
   * happen, since we're getting a fresh table out of the pool, and the
   * writeBuffer should be empty.
   */
  try {
    table.setWriteBufferSize(writeBufferSize);
  } catch (IOException e) {
    throw new DatasetIOException("Error flushing commits for table ["
        + table + "]", e);
  }
}
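The pattern above — disable auto-flush on a pooled table, buffer puts, then flush — can also be used directly against HTablePool without the kite wrapper. A minimal sketch under that assumption; the table name, column family, and qualifier are illustrative:

// Batch many puts through a pooled table (HBase 0.9x-era API).
void batchLoad(Configuration conf) throws IOException {
  HTablePool pool = new HTablePool(conf, 10);
  HTableInterface table = pool.getTable("example_table");
  try {
    table.setAutoFlush(false);                 // buffer puts client-side
    table.setWriteBufferSize(2 * 1024 * 1024); // 2 MB, matching the default
    for (int i = 0; i < 10000; i++) {
      Put put = new Put(Bytes.toBytes("row-" + i));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(i));
      table.put(put); // queued until the write buffer fills
    }
    table.flushCommits(); // push any puts still sitting in the buffer
  } finally {
    table.close(); // return the table to the pool
  }
  pool.close();
}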
Example #7
Source File: UserProfileExample.java From kite with Apache License 2.0
/**
 * The constructor will start by registering the schemas with the meta store
 * table in HBase, and create the required tables to run.
 */
public UserProfileExample() throws InterruptedException {
  Configuration conf = HBaseConfiguration.create();
  HTablePool pool = new HTablePool(conf, 10);
  SchemaManager schemaManager = new DefaultSchemaManager(pool);

  registerSchemas(conf, schemaManager);

  userProfileDao = new SpecificAvroDao<UserProfileModel>(pool,
      "kite_example_user_profiles", "UserProfileModel", schemaManager);
  userActionsDao = new SpecificAvroDao<UserActionsModel>(pool,
      "kite_example_user_profiles", "UserActionsModel", schemaManager);
  userProfileActionsDao = SpecificAvroDao.buildCompositeDaoWithEntityManager(
      pool, "kite_example_user_profiles", UserProfileActionsModel.class,
      schemaManager);
}
Example #8
Source File: SchemaToolTest.java From kite with Apache License 2.0
@Before
public void before() throws Exception {
  tableName = UUID.randomUUID().toString().substring(0, 8);
  tablePool = new HTablePool(HBaseTestUtils.getConf(), 10);
  manager = new DefaultSchemaManager(tablePool);
  tool = new SchemaTool(new HBaseAdmin(HBaseTestUtils.getConf()), manager);
}
Example #9
Source File: SpecificAvroDao.java From kite with Apache License 2.0
public SpecificMapCompositeAvroDao(HTablePool tablePool, String tableName,
    List<EntityMapper<S>> entityMappers) {
  super(tablePool, tableName, entityMappers);
  subEntitySchemas = Lists.newArrayList();
  for (EntityMapper<S> entityMapper : entityMappers) {
    subEntitySchemas.add(parser.parseEntitySchema(
        entityMapper.getEntitySchema().getRawSchema()).getAvroSchema());
  }
}
Example #10
Source File: HBaseMetadataProviderTest.java From kite with Apache License 2.0
@BeforeClass
public static void beforeClass() throws Exception {
  HTablePool tablePool = HBaseTestUtils.startHBaseAndGetPool();

  // The managed table should be created by HBaseDatasetRepository
  HBaseTestUtils.util.deleteTable(Bytes.toBytes(managedTableName));

  SchemaManager schemaManager = new DefaultSchemaManager(tablePool);
  HBaseAdmin admin = new HBaseAdmin(HBaseTestUtils.getConf());
  provider = new HBaseMetadataProvider(admin, schemaManager);
}
Example #11
Source File: ManagedDaoTest.java From kite with Apache License 2.0
@Before
public void before() throws Exception {
  tablePool = new HTablePool(HBaseTestUtils.getConf(), 10);
  SchemaTool tool = new SchemaTool(new HBaseAdmin(HBaseTestUtils.getConf()),
      new DefaultSchemaManager(tablePool));
  tool.createOrMigrateSchema(tableName, testRecord, true);
  tool.createOrMigrateSchema(tableName, testRecordv2, true);
  tool.createOrMigrateSchema(compositeTableName, compositeSubrecord1, true);
  tool.createOrMigrateSchema(compositeTableName, compositeSubrecord2, true);
  tool.createOrMigrateSchema(incrementTableName, testIncrement, true);
  manager = new DefaultSchemaManager(tablePool);
}
Example #12
Source File: HBaseTestUtils.java From kite with Apache License 2.0
public static SchemaManager initializeSchemaManager(HTablePool tablePool,
    String directory) throws Exception {
  SchemaManager entityManager = new DefaultSchemaManager(tablePool);
  SchemaTool schemaTool = new SchemaTool(new HBaseAdmin(getConf()),
      entityManager);
  schemaTool.createOrMigrateSchemaDirectory(directory, true);
  return entityManager;
}
Example #13
Source File: IndexedRegion.java From hbase-secondary-index with GNU General Public License v3.0
@SuppressWarnings("deprecation") public IndexedRegion(final Path basedir, final HLog log, final FileSystem fs, final Configuration conf, final HRegionInfo regionInfo, final FlushRequester flushListener) throws IOException { super(basedir, log, fs, conf, regionInfo, flushListener); this.indexTableDescriptor = new IndexedTableDescriptor( regionInfo.getTableDesc()); this.conf = conf; this.tablePool = new HTablePool(); }
Example #14
Source File: SpecificAvroDao.java From kite with Apache License 2.0
/**
 * Create a CompositeDao, which will return SpecificRecord instances
 * represented by the entitySchemaString avro schema. This avro schema must
 * be a composition of the schemas in the subEntitySchemaStrings list.
 *
 * @param tablePool
 *          An HTablePool instance to use for connecting to HBase.
 * @param tableName
 *          The table name of the managed schema.
 * @param entityClass
 *          The class that is the composite record, which is made up of
 *          fields referencing the sub records.
 * @param schemaManager
 *          The SchemaManager to use to create the entity mappers that will
 *          power this dao.
 * @return The CompositeDao instance.
 * @throws SchemaNotFoundException
 */
public static <K extends SpecificRecord, E extends SpecificRecord, S extends SpecificRecord> Dao<E> buildCompositeDaoWithEntityManager(
    HTablePool tablePool, String tableName, Class<E> entityClass,
    SchemaManager schemaManager) {

  Schema entitySchema = getSchemaFromEntityClass(entityClass);
  List<EntityMapper<S>> entityMappers = new ArrayList<EntityMapper<S>>();
  for (Schema.Field field : entitySchema.getFields()) {
    entityMappers.add(new VersionedAvroEntityMapper.Builder()
        .setSchemaManager(schemaManager).setTableName(tableName)
        .setEntityName(getSchemaName(field.schema())).setSpecific(true)
        .<S> build());
  }

  return new SpecificCompositeAvroDao<E, S>(tablePool, tableName,
      entityMappers, entityClass);
}
Example #15
Source File: BaseEntityScanner.java From kite with Apache License 2.0
public Builder(HTablePool tablePool, String tableName,
    EntityMapper<E> entityMapper) {
  super(tablePool, tableName, entityMapper);
}
Example #16
Source File: CompositeDaoTest.java From kite with Apache License 2.0
@Before
public void beforeTest() throws Exception {
  tablePool = new HTablePool(HBaseTestUtils.getConf(), 10);
}
Example #17
Source File: AvroDaoTest.java From kite with Apache License 2.0
@Before
public void beforeTest() throws Exception {
  HBaseTestUtils.util.truncateTable(Bytes.toBytes(tableName));
  tablePool = new HTablePool(HBaseTestUtils.getConf(), 10);
}
Example #18
Source File: HBaseTestUtils.java From kite with Apache License 2.0
public static HTablePool startHBaseAndGetPool() throws Exception {
  getMiniCluster();
  return new HTablePool(getConf(), 10);
}
Example #19
Source File: EagleConfigFactory.java From Eagle with Apache License 2.0
private EagleConfigFactory() {
  init();
  this.pool = new HTablePool(this.hbaseConf, 10);
}
Example #20
Source File: EagleConfigFactory.java From eagle with Apache License 2.0
private EagleConfigFactory() {
  init();
  if (this.getStorageType() == null
      || this.getStorageType().equalsIgnoreCase("hbase")) {
    this.pool = new HTablePool(this.hbaseConf, 10);
  }
}
Example #21
Source File: AbstractHBaseClient.java From jstorm with Apache License 2.0
public void initFromStormConf(Map stormConf) {
  logger.info("init hbase client.");
  Configuration conf = makeConf(stormConf);
  hTablePool = new HTablePool(conf, TABLE_POOL_SIZE);
  logger.info("finished init hbase client.");
}
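The makeConf helper is not shown in this excerpt. A hypothetical sketch of such a helper, assuming the topology configuration carries standard HBase client settings; the configuration keys below are assumptions, not jstorm's actual keys:

// Hypothetical helper; the storm-conf keys are illustrative.
private Configuration makeConf(Map stormConf) {
  Configuration conf = HBaseConfiguration.create();
  Object quorum = stormConf.get("hbase.zookeeper.quorum");
  if (quorum != null) {
    conf.set("hbase.zookeeper.quorum", (String) quorum);
  }
  Object port = stormConf.get("hbase.zookeeper.property.clientPort");
  if (port != null) {
    conf.set("hbase.zookeeper.property.clientPort", String.valueOf(port));
  }
  return conf;
}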
Example #22
Source File: HBaseDatasetRepository.java From kite with Apache License 2.0
HBaseDatasetRepository(HBaseAdmin hBaseAdmin, HTablePool tablePool,
    URI repositoryUri) {
  this.tablePool = tablePool;
  this.schemaManager = new DefaultSchemaManager(tablePool);
  this.metadataProvider = new HBaseMetadataProvider(hBaseAdmin, schemaManager);
  this.repositoryUri = repositoryUri;
}
Example #23
Source File: ManagedSchemaHBaseDao.java From kite with Apache License 2.0
public ManagedSchemaHBaseDao(HTablePool tablePool, String managedSchemaTable) {
  managedSchemaDao = new SpecificAvroDao<ManagedSchema>(tablePool,
      managedSchemaTable, managedSchemaEntity.getRawSchema(),
      ManagedSchema.class);
}
Example #24
Source File: DefaultSchemaManager.java From kite with Apache License 2.0
public DefaultSchemaManager(HTablePool tablePool, String managedSchemaTable) {
  this(new ManagedSchemaHBaseDao(tablePool, managedSchemaTable));
}
Example #25
Source File: SpecificAvroDao.java From kite with Apache License 2.0
/**
 * Create a CompositeDao, which will return SpecificRecord instances
 * represented by the entitySchemaString avro schema. This avro schema must
 * be a composition of the schemas in the subEntitySchemaStrings list.
 *
 * @param tablePool
 *          An HTablePool instance to use for connecting to HBase
 * @param tableName
 *          The table name this dao will read from and write to
 * @param subEntitySchemaStrings
 *          The list of entities that make up the composite. This list must
 *          be in the same order as the fields defined in the
 *          entitySchemaString.
 * @param entityClass
 *          The class of the SpecificRecord this DAO will persist and fetch.
 * @return The CompositeDao instance.
 * @throws SchemaNotFoundException
 * @throws ValidationException
 */
@SuppressWarnings("unchecked")
public static <E extends SpecificRecord, S extends SpecificRecord> Dao<E> buildCompositeDao(
    HTablePool tablePool, String tableName,
    List<String> subEntitySchemaStrings, Class<E> entityClass) {

  List<EntityMapper<S>> entityMappers = new ArrayList<EntityMapper<S>>();
  for (String subEntitySchemaString : subEntitySchemaStrings) {
    AvroEntitySchema subEntitySchema = parser
        .parseEntitySchema(subEntitySchemaString);
    Class<S> subEntityClass;
    try {
      subEntityClass = (Class<S>) Class.forName(subEntitySchema
          .getAvroSchema().getFullName());
    } catch (ClassNotFoundException e) {
      throw new RuntimeException(e);
    }
    entityMappers.add(SpecificAvroDao.<S> buildEntityMapper(
        subEntitySchemaString, subEntitySchemaString, subEntityClass));
  }

  return new SpecificCompositeAvroDao<E, S>(tablePool, tableName,
      entityMappers, entityClass);
}
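A hedged sketch of how this factory might be called. CompositeRecord, SubRecord1, and SubRecord2 are hypothetical Avro-generated SpecificRecord classes assumed to be on the classpath, and the .avsc resources are assumed to hold the schemas those classes were generated from:

// Hypothetical usage; all class, resource, and table names are illustrative.
String subSchema1 = AvroUtils.inputStreamToString(
    CompositeExample.class.getResourceAsStream("/SubRecord1.avsc"));
String subSchema2 = AvroUtils.inputStreamToString(
    CompositeExample.class.getResourceAsStream("/SubRecord2.avsc"));

HTablePool pool = new HTablePool(HBaseConfiguration.create(), 10);
Dao<CompositeRecord> dao = SpecificAvroDao.buildCompositeDao(
    pool, "example_table",
    Arrays.asList(subSchema1, subSchema2), CompositeRecord.class);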
Example #26
Source File: BaseEntityScanner.java From kite with Apache License 2.0
/**
 * @param scan
 *          The Scan object that will be used
 * @param tablePool
 *          The HTablePool instance to get a table to open a scanner on.
 * @param tableName
 *          The table name to perform the scan on.
 * @param entityMapper
 *          The EntityMapper to map rows to entities.
 */
public BaseEntityScanner(Scan scan, HTablePool tablePool, String tableName,
    EntityMapper<E> entityMapper) {
  this.scan = scan;
  this.entityMapper = entityMapper;
  this.tablePool = tablePool;
  this.tableName = tableName;
  this.state = ReaderWriterState.NEW;
}
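Under the hood, a scanner like this checks a table out of the pool and opens a ResultScanner on it. A minimal sketch of that raw pattern, independent of kite's mapper layer; the table, family, and qualifier names are illustrative:

// A raw scan over a pooled table (HBase 0.9x-era API).
void scanExample(HTablePool pool) throws IOException {
  HTableInterface table = pool.getTable("example_table");
  ResultScanner scanner = null;
  try {
    Scan scan = new Scan();
    scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));
    scanner = table.getScanner(scan);
    for (Result result : scanner) {
      // kite would hand each Result to an EntityMapper here.
      System.out.println(Bytes.toString(result.getRow()));
    }
  } finally {
    if (scanner != null) {
      scanner.close(); // release server-side scanner resources
    }
    table.close();     // return the table to the pool
  }
}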
Example #27
Source File: BaseDao.java From kite with Apache License 2.0
/**
 * Constructor that will internally create an HBaseClientTemplate from the
 * tablePool and the tableName.
 *
 * @param tablePool
 *          A pool of HBase Tables.
 * @param tableName
 *          The name of the table this dao persists to and fetches from.
 * @param entityMapper
 *          Maps between entities and the HBase operations.
 */
public BaseDao(HTablePool tablePool, String tableName,
    EntityMapper<E> entityMapper) {
  this.tableName = tableName;
  this.entityMapper = entityMapper;
  this.clientTemplate = new HBaseClientTemplate(tablePool, tableName);
}
Example #28
Source File: GenericAvroDao.java From kite with Apache License 2.0
/**
 * Construct a GenericAvroDao.
 *
 * @param tablePool
 *          An HTablePool instance to use for connecting to HBase.
 * @param tableName
 *          The name of the table this Dao will read from and write to in
 *          HBase.
 * @param entitySchemaStream
 *          The InputStream that contains a JSON string representing the
 *          special avro record schema, that contains metadata in
 *          annotations of the Avro record fields. See
 *          {@link AvroEntityMapper} for details.
 */
public GenericAvroDao(HTablePool tablePool, String tableName,
    InputStream entitySchemaStream) {
  super(tablePool, tableName, buildEntityMapper(AvroUtils
      .inputStreamToString(entitySchemaStream)));
}
Example #29
Source File: EntityScannerBuilder.java From kite with Apache License 2.0
/**
 * This is an abstract Builder object for the Entity Scanners, which will
 * allow users to dynamically construct a scanner object using the Builder
 * pattern. This is useful when the user doesn't have all the up-front
 * information needed to create a scanner. It also makes it easier to add
 * more options to the scanner later, so this will be the preferred way for
 * users to create scanners.
 */
public EntityScannerBuilder(HTablePool tablePool, String tableName,
    EntityMapper<E> entityMapper) {
  this.tablePool = tablePool;
  this.tableName = tableName;
  this.entityMapper = entityMapper;
}
Example #30
Source File: BaseEntityBatch.java From kite with Apache License 2.0
/**
 * Checks an HTable out of the HTablePool and modifies it to take advantage
 * of batch puts using the default writeBufferSize (2MB). This is very
 * useful when performing many consecutive puts.
 *
 * @param clientTemplate
 *          The client template to use
 * @param entityMapper
 *          The EntityMapper to use for mapping
 * @param pool
 *          The HBase table pool
 * @param tableName
 *          The name of the HBase table
 */
public BaseEntityBatch(HBaseClientTemplate clientTemplate,
    EntityMapper<E> entityMapper, HTablePool pool, String tableName) {
  this.table = pool.getTable(tableName);
  this.table.setAutoFlush(false);
  this.clientTemplate = clientTemplate;
  this.entityMapper = entityMapper;
  this.state = ReaderWriterState.NEW;
}