Java Code Examples for org.apache.beam.sdk.io.WriteFiles#withSharding()
The following examples show how to use org.apache.beam.sdk.io.WriteFiles#withSharding().
To view an example in context, follow the link to the original project or source file above it.
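Before the full examples, a minimal sketch of what withSharding() expects: a PTransform that turns the input PCollection into a PCollectionView<Integer> holding the desired shard count. This is a sketch only; mySink is a hypothetical placeholder for a concrete FileBasedSink<Integer, Void, Integer> (Example 1 below shows one way to construct a stub sink).

import org.apache.beam.sdk.io.WriteFiles;
import org.apache.beam.sdk.transforms.Sum;

// A minimal sketch, assuming `mySink` is an existing
// FileBasedSink<Integer, Void, Integer> (hypothetical placeholder).
WriteFiles<Integer, Void, Integer> write = WriteFiles.to(mySink);

// Compute the shard count from the data itself: here the global sum of the
// input integers, materialized as a singleton side-input view. Setting this
// disables runner-determined sharding, just like write.withNumShards(n) would.
WriteFiles<Integer, Void, Integer> customSharded =
    write.withSharding(Sum.integersGlobally().asSingletonView());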
Example 1
Source File: PTransformMatchersTest.java From beam with Apache License 2.0
@Test
public void writeWithRunnerDeterminedSharding() {
  ResourceId outputDirectory = LocalResources.fromString("/foo/bar", true /* isDirectory */);
  FilenamePolicy policy =
      DefaultFilenamePolicy.fromStandardParameters(
          StaticValueProvider.of(outputDirectory),
          DefaultFilenamePolicy.DEFAULT_UNWINDOWED_SHARD_TEMPLATE,
          "",
          false);
  // A stub sink is enough here: the matcher only inspects the transform's
  // sharding configuration, never the WriteOperation.
  WriteFiles<Integer, Void, Integer> write =
      WriteFiles.to(
          new FileBasedSink<Integer, Void, Integer>(
              StaticValueProvider.of(outputDirectory),
              DynamicFileDestinations.constant(policy)) {
            @Override
            public WriteOperation<Void, Integer> createWriteOperation() {
              return null;
            }
          });
  // No sharding configured: the runner determines the shard count.
  assertThat(
      PTransformMatchers.writeWithRunnerDeterminedSharding().matches(appliedWrite(write)),
      is(true));

  // A fixed shard count opts out of runner-determined sharding.
  WriteFiles<Integer, Void, Integer> withStaticSharding = write.withNumShards(3);
  assertThat(
      PTransformMatchers.writeWithRunnerDeterminedSharding()
          .matches(appliedWrite(withStaticSharding)),
      is(false));

  // So does a custom sharding transform supplied via withSharding().
  WriteFiles<Integer, Void, Integer> withCustomSharding =
      write.withSharding(Sum.integersGlobally().asSingletonView());
  assertThat(
      PTransformMatchers.writeWithRunnerDeterminedSharding()
          .matches(appliedWrite(withCustomSharding)),
      is(false));
}
Example 2
Source File: FlinkStreamingPipelineTranslator.java From beam with Apache License 2.0
@Override
public PTransformReplacement<PCollection<UserT>, WriteFilesResult<DestinationT>>
    getReplacementTransform(
        AppliedPTransform<
                PCollection<UserT>,
                WriteFilesResult<DestinationT>,
                WriteFiles<UserT, DestinationT, OutputT>>
            transform) {
  // By default, if numShards is not set, WriteFiles will produce one file per bundle. In
  // streaming, there are large numbers of small bundles, resulting in many tiny files.
  // Instead we pick parallelism * 2 to ensure full parallelism but prevent too many files.
  Integer jobParallelism = options.getParallelism();
  Preconditions.checkArgument(
      jobParallelism > 0,
      "Parallelism of a job should be greater than 0. Currently set: %s",
      jobParallelism);
  int numShards = jobParallelism * 2;
  try {
    List<PCollectionView<?>> sideInputs =
        WriteFilesTranslation.getDynamicDestinationSideInputs(transform);
    FileBasedSink sink = WriteFilesTranslation.getSink(transform);
    @SuppressWarnings("unchecked")
    WriteFiles<UserT, DestinationT, OutputT> replacement =
        WriteFiles.to(sink).withSideInputs(sideInputs);
    if (WriteFilesTranslation.isWindowedWrites(transform)) {
      replacement = replacement.withWindowedWrites();
    }
    if (WriteFilesTranslation.isRunnerDeterminedSharding(transform)) {
      replacement = replacement.withNumShards(numShards);
    } else {
      if (transform.getTransform().getNumShardsProvider() != null) {
        replacement = replacement.withNumShards(transform.getTransform().getNumShardsProvider());
      }
      if (transform.getTransform().getComputeNumShards() != null) {
        replacement = replacement.withSharding(transform.getTransform().getComputeNumShards());
      }
    }
    if (options.isAutoBalanceWriteFilesShardingEnabled()) {
      replacement =
          replacement.withShardingFunction(
              new FlinkAutoBalancedShardKeyShardingFunction<>(
                  jobParallelism,
                  options.getMaxParallelism(),
                  sink.getDynamicDestinations().getDestinationCoder()));
    }
    return PTransformReplacement.of(
        PTransformReplacements.getSingletonMainInput(transform), replacement);
  } catch (Exception e) {
    throw new RuntimeException(e);
  }
}
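Note the design choice in this override: only when the original transform left sharding runner-determined does the Flink replacement pin the shard count to parallelism * 2 (per the comment in the code, to keep full parallelism in streaming without producing one tiny file per bundle). An explicit withNumShards(...) or withSharding(...) setting on the original transform is carried over to the replacement unchanged.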