com.google.cloud.bigquery.TableDataWriteChannel Java Examples
The following examples show how to use
com.google.cloud.bigquery.TableDataWriteChannel.
You can follow the link above each example to go to the original project or source file.
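Before the individual examples, here is a minimal, self-contained sketch of the pattern they share: obtain a TableDataWriteChannel from BigQuery.writer(...), wrap it as an OutputStream, stream the data in, then close the channel and inspect the resulting load job. The dataset, table, and file names below are placeholders, and the sketch assumes application-default credentials are available; it is an illustration of the pattern, not code taken from either project.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.TableDataWriteChannel;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.WriteChannelConfiguration;

import java.io.IOException;
import java.io.OutputStream;
import java.nio.channels.Channels;
import java.nio.file.Files;
import java.nio.file.Paths;

public class TableDataWriteChannelSketch {

  public static void main(String[] args) throws IOException, InterruptedException {
    // Uses application-default credentials and the default project.
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    // Hypothetical dataset, table, and local CSV file.
    TableId tableId = TableId.of("my_dataset", "my_table");
    WriteChannelConfiguration configuration =
        WriteChannelConfiguration.newBuilder(tableId)
            .setFormatOptions(FormatOptions.csv())
            .build();

    // The channel streams bytes into a BigQuery load job.
    TableDataWriteChannel writer = bigquery.writer(configuration);
    try (OutputStream stream = Channels.newOutputStream(writer)) {
      Files.copy(Paths.get("my-data.csv"), stream);
    }

    // Once the channel is closed, the associated load job becomes available.
    Job job = writer.getJob().waitFor();
    System.out.println("Load job finished: " + job.getStatus());
  }
}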
Example #1
Source File: BigQueryTemplate.java From spring-cloud-gcp with Apache License 2.0
@Override
public ListenableFuture<Job> writeDataToTable(
    String tableName, InputStream inputStream, FormatOptions dataFormatOptions) {
  TableId tableId = TableId.of(datasetName, tableName);

  WriteChannelConfiguration writeChannelConfiguration = WriteChannelConfiguration
      .newBuilder(tableId)
      .setFormatOptions(dataFormatOptions)
      .setAutodetect(this.autoDetectSchema)
      .setWriteDisposition(this.writeDisposition)
      .build();

  TableDataWriteChannel writer = bigQuery.writer(writeChannelConfiguration);

  try (OutputStream sink = Channels.newOutputStream(writer)) {
    // Write data from data input file to BigQuery
    StreamUtils.copy(inputStream, sink);
  }
  catch (IOException e) {
    throw new BigQueryException("Failed to write data to BigQuery tables.", e);
  }

  if (writer.getJob() == null) {
    throw new BigQueryException(
        "Failed to initialize the BigQuery write job.");
  }

  return createJobFuture(writer.getJob());
}
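A possible caller of this method might look like the sketch below. It assumes a BigQueryTemplate has already been configured elsewhere (for example as a Spring bean with its dataset and write disposition set) and that the spring-cloud-gcp-bigquery module is on the classpath; the file name, table name, and class name are placeholders of mine, not part of the project. Note that writeDataToTable copies the stream synchronously and only the job polling is asynchronous, so closing the stream right after the call is safe.

import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.Job;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.springframework.cloud.gcp.bigquery.core.BigQueryTemplate;
import org.springframework.util.concurrent.ListenableFuture;

public class BigQueryTemplateCaller {

  private final BigQueryTemplate bigQueryTemplate;

  public BigQueryTemplateCaller(BigQueryTemplate bigQueryTemplate) {
    this.bigQueryTemplate = bigQueryTemplate;
  }

  public void loadCsv() throws IOException {
    try (InputStream csv = Files.newInputStream(Paths.get("my-data.csv"))) {
      // The stream is copied into the write channel before this call returns;
      // the returned future completes when the load job itself finishes.
      ListenableFuture<Job> loadJob =
          bigQueryTemplate.writeDataToTable("my_table", csv, FormatOptions.csv());
      loadJob.addCallback(
          job -> System.out.println("Load job completed: " + job.getJobId()),
          error -> System.err.println("Load failed: " + error.getMessage()));
    }
  }
}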
Example #2
Source File: BigQuerySnippets.java From google-cloud-java with Apache License 2.0
/** Example of writing a local file to a table. */
// [TARGET writer(WriteChannelConfiguration)]
// [VARIABLE "my_dataset_name"]
// [VARIABLE "my_table_name"]
// [VARIABLE FileSystems.getDefault().getPath(".", "my-data.csv")]
// [VARIABLE "us"]
public long writeFileToTable(String datasetName, String tableName, Path csvPath, String location)
    throws IOException, InterruptedException, TimeoutException {
  // [START bigquery_load_from_file]
  TableId tableId = TableId.of(datasetName, tableName);
  WriteChannelConfiguration writeChannelConfiguration =
      WriteChannelConfiguration.newBuilder(tableId).setFormatOptions(FormatOptions.csv()).build();

  // Generally, location can be inferred based on the location of the referenced dataset.
  // However, it can also be set explicitly to force job execution to be routed to a specific
  // processing location. See https://cloud.google.com/bigquery/docs/locations for more info.
  JobId jobId = JobId.newBuilder().setLocation(location).build();

  TableDataWriteChannel writer = bigquery.writer(jobId, writeChannelConfiguration);

  // Write data to writer
  try (OutputStream stream = Channels.newOutputStream(writer)) {
    Files.copy(csvPath, stream);
  } finally {
    writer.close();
  }

  // Get load job
  Job job = writer.getJob();
  job = job.waitFor();
  LoadStatistics stats = job.getStatistics();
  return stats.getOutputRows();
  // [END bigquery_load_from_file]
}
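This snippet assumes the load succeeds and reads LoadStatistics directly. A hedged follow-up sketch that checks the completed job for errors before reading the row count is shown below; the class and method names are mine, not part of the snippet.

import com.google.cloud.bigquery.BigQueryError;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobStatistics.LoadStatistics;
import com.google.cloud.bigquery.TableDataWriteChannel;

public class LoadJobChecks {

  /** Returns the number of rows loaded, failing loudly if the load job did not succeed. */
  static long rowsLoadedOrThrow(TableDataWriteChannel writer) throws InterruptedException {
    // The channel must already be closed, otherwise getJob() returns null.
    Job job = writer.getJob().waitFor();
    if (job == null) {
      throw new IllegalStateException("The load job no longer exists");
    }
    BigQueryError error = job.getStatus().getError();
    if (error != null) {
      // getError() reports only the first problem; getExecutionErrors() lists all of them.
      throw new IllegalStateException("Load failed: " + error);
    }
    LoadStatistics statistics = job.getStatistics();
    return statistics.getOutputRows();
  }
}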