com.hazelcast.jet.pipeline.SourceBuilder Java Examples
The following examples show how to use
com.hazelcast.jet.pipeline.SourceBuilder.
Each example notes the source file it was taken from, along with the originating open-source project and its license.
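Before the project examples, here is a minimal sketch of the core SourceBuilder pattern they all follow: a create function builds a per-processor context object, and a fill-buffer function is called repeatedly to add items (closing the buffer ends a batch source). The names below ("numbers", the AtomicInteger context) are illustrative, not taken from any of the projects:

// Minimal sketch of the SourceBuilder API: a batch source emitting 0..99.
BatchSource<Integer> numbers = SourceBuilder
        .batch("numbers", ctx -> new AtomicInteger())
        .<Integer>fillBufferFn((counter, buffer) -> {
            // Jet calls this repeatedly; add a chunk of items per call
            for (int i = 0; i < 16; i++) {
                int n = counter.getAndIncrement();
                if (n >= 100) {
                    buffer.close(); // signal that the batch is complete
                    return;
                }
                buffer.add(n);
            }
        })
        .build();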
Example #1
Source File: PulsarConsumerBuilder.java (from hazelcast-jet-contrib, Apache License 2.0)

/**
 * Receive the messages as a batch. The {@link BatchReceivePolicy} is
 * configured while creating the Pulsar {@link Consumer}.
 * In this method, emitted items are created by applying the projection
 * function to the messages received from the Pulsar client. If there is an
 * event time associated with the message, it sets the event time as the
 * timestamp of the emitted item. Otherwise, it sets the publish time
 * (which always exists) of the message as the timestamp.
 */
private void fillBuffer(SourceBuilder.TimestampedSourceBuffer<T> sourceBuffer) throws PulsarClientException {
    Messages<M> messages = consumer.batchReceive();
    for (Message<M> message : messages) {
        if (message.getEventTime() != 0) {
            sourceBuffer.add(projectionFn.apply(message), message.getEventTime());
        } else {
            sourceBuffer.add(projectionFn.apply(message), message.getPublishTime());
        }
    }
    consumer.acknowledgeAsync(messages)
            .exceptionally(t -> {
                logger.warning(buildLogMessage(messages));
                return null;
            });
}
Example #2
Source File: MongoDBSourceBuilder.java (from hazelcast-jet-contrib, Apache License 2.0)

void fillBuffer(SourceBuilder.TimestampedSourceBuffer<U> buffer) {
    if (cursor == null) {
        if (resumeToken != null) {
            changeStreamIterable.resumeAfter(resumeToken);
        } else if (timestamp != null) {
            changeStreamIterable.startAtOperationTime(timestamp);
        }
        cursor = changeStreamIterable.batchSize(BATCH_SIZE).iterator();
    }
    ChangeStreamDocument<? extends T> changeStreamDocument = null;
    for (int i = 0; i < BATCH_SIZE; i++) {
        changeStreamDocument = cursor.tryNext();
        if (changeStreamDocument == null) {
            // we've exhausted the stream
            break;
        }
        long clusterTime = clusterTime(changeStreamDocument);
        U item = mapFn.apply(changeStreamDocument);
        if (item != null) {
            buffer.add(item, clusterTime);
        }
    }
    resumeToken = changeStreamDocument == null ? null : changeStreamDocument.getResumeToken();
}
Example #3
Source File: RedisSources.java (from hazelcast-jet-contrib, Apache License 2.0)

void fillBuffer(SourceBuilder.SourceBuffer<ScoredValue<V>> buffer) throws InterruptedException {
    if (exception != null) {
        // something went wrong on the Redis client thread
        throw exception;
    }
    for (int i = 0; i < NO_OF_ITEMS_TO_FETCH_AT_ONCE; i++) {
        ScoredValue<V> item = queue.poll(POLL_DURATION.toMillis(), TimeUnit.MILLISECONDS);
        if (item == null) {
            if (commandFuture.isDone()) {
                buffer.close();
            }
            return;
        } else {
            buffer.add(item);
        }
    }
}
Example #4
Source File: RedisSources.java (from hazelcast-jet-contrib, Apache License 2.0)

void fillBuffer(SourceBuilder.SourceBuffer<T> buffer) throws InterruptedException {
    if (exception != null) {
        // something went wrong on the Redis client thread
        throw exception;
    }
    int itemsFetched = queue.drainTo(batchHolder, NO_OF_ITEMS_TO_FETCH_AT_ONCE);
    if (itemsFetched <= 0) {
        if (commandFuture.isDone()) {
            buffer.close();
        }
    } else {
        batchHolder.stream()
                   .map(mapFn)
                   .forEach(buffer::add);
        batchHolder.clear();
    }
}
Example #5
Source File: TwitterSources.java (from hazelcast-jet-contrib, Apache License 2.0)

private void fillTimestampedBuffer(SourceBuilder.TimestampedSourceBuffer<String> sourceBuffer) {
    queue.drainTo(buffer, MAX_FILL_ELEMENTS);
    for (String item : buffer) {
        try {
            JsonObject object = Json.parse(item).asObject();
            String timestampStr = object.getString("timestamp_ms", null);
            if (timestampStr != null) {
                long timestamp = Long.parseLong(timestampStr);
                sourceBuffer.add(item, timestamp);
            } else {
                logger.warning("The tweet doesn't contain 'timestamp_ms' field\n" + item);
            }
        } catch (Exception e) {
            logger.warning("Error getting 'timestamp_ms' field from the tweet: " + e + "\n" + item, e);
        }
    }
    buffer.clear();
}
Example #6
Source File: AbstractKafkaConnectSource.java (from hazelcast-jet-contrib, Apache License 2.0)

public void fillBuffer(SourceBuilder.TimestampedSourceBuffer<T> buf) {
    if (!taskInit) {
        task.initialize(new JetSourceTaskContext());
        task.start(taskConfig);
        taskInit = true;
    }
    try {
        List<SourceRecord> records = task.poll();
        if (records == null) {
            return;
        }
        for (SourceRecord record : records) {
            boolean added = addToBuffer(record, buf);
            if (added) {
                partitionsToOffset.put(record.sourcePartition(), record.sourceOffset());
            }
        }
    } catch (InterruptedException e) {
        throw rethrow(e);
    }
}
Example #7
Source File: PulsarReaderBuilder.java (from hazelcast-jet-contrib, Apache License 2.0)

/**
 * Receive the messages as a batch.
 * In this method, emitted items are created by applying the projection
 * function to the messages received from the Pulsar client. If there is an
 * event time associated with the message, it sets the event time as the
 * timestamp of the emitted item. Otherwise, it sets the publish time
 * (which always exists) of the message as the timestamp.
 */
private void fillBuffer(SourceBuilder.TimestampedSourceBuffer<T> sourceBuffer) throws PulsarClientException {
    if (reader == null) {
        createReader();
    }
    int count = 0;
    while (!queue.isEmpty() && count++ < MAX_FILL_MESSAGES) {
        Message<M> message = queue.poll();
        long timestamp;
        if (message.getEventTime() != 0) {
            timestamp = message.getEventTime();
        } else {
            timestamp = message.getPublishTime();
        }
        T item = projectionFn.apply(message);
        offset = message.getMessageId();
        if (item != null) {
            sourceBuffer.add(item, timestamp);
        }
    }
}
Example #8
Source File: TradeSource.java (from hazelcast-jet-training, Apache License 2.0)

void fillBuffer(SourceBuilder.TimestampedSourceBuffer<Trade> buffer) {
    ThreadLocalRandom rnd = ThreadLocalRandom.current();
    for (int i = 0; i < tradesPerSec; i++) {
        String ticker = symbols.get(rnd.nextInt(symbols.size()));
        long tradeTime = System.currentTimeMillis();
        Trade trade = new Trade(tradeTime, ticker, QUANTITY, rnd.nextInt(5000));
        buffer.add(trade, tradeTime);
    }
    LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(1)); // sleep for 1 second
}
Example #9
Source File: TradeSource.java (from hazelcast-jet-training, Apache License 2.0)

void fillBuffer(SourceBuilder.TimestampedSourceBuffer<Trade> buffer) {
    long interval = TimeUnit.SECONDS.toNanos(1) / tradesPerSec;
    ThreadLocalRandom rnd = ThreadLocalRandom.current();
    for (int i = 0; i < MAX_BATCH_SIZE; i++) {
        if (System.nanoTime() < emitSchedule) {
            break;
        }
        String ticker = tickers.get(rnd.nextInt(tickers.size()));
        int price = tickerToPrice.compute(ticker, (t, v) -> v + rnd.nextInt(-1, 2));
        Trade trade = new Trade(System.currentTimeMillis(), ticker, QUANTITY, price);
        buffer.add(trade, trade.getTime());
        emitSchedule += interval;
    }
}
Example #10
Source File: TradeSource.java (from hazelcast-jet-training, Apache License 2.0)

public static StreamSource<Trade> tradeSource(List<String> tickers, int tradesPerSec) {
    return SourceBuilder.timestampedStream("trade-source", ctx -> {
        Map<Integer, List<String>> partitions = partitionTickers(tickers, ctx.totalParallelism());
        return new TradeGenerator(partitions.get(ctx.globalProcessorIndex()), tradesPerSec);
    })
            .fillBufferFn(TradeGenerator::fillBuffer)
            .distributed(1)
            .build();
}
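The partitionTickers helper used above is not part of the snippet. A minimal sketch of what it might look like, assuming a simple round-robin split of the ticker list across ctx.totalParallelism() processor indices:

// Hypothetical helper (not shown in the original snippet): round-robin
// assignment of tickers to processor indices, so each parallel instance
// of the distributed source generates trades for a disjoint subset.
private static Map<Integer, List<String>> partitionTickers(List<String> tickers, int totalParallelism) {
    Map<Integer, List<String>> partitions = new HashMap<>();
    for (int i = 0; i < totalParallelism; i++) {
        partitions.put(i, new ArrayList<>());
    }
    for (int i = 0; i < tickers.size(); i++) {
        partitions.get(i % totalParallelism).add(tickers.get(i));
    }
    return partitions;
}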
Example #11
Source File: FlightDataSource.java (from hazelcast-jet-demos, Apache License 2.0)

public static StreamSource<Aircraft> flightDataSource(String url, long pollIntervalMillis) {
    return SourceBuilder.timestampedStream("Flight Data Source",
            ctx -> new FlightDataSource(ctx.logger(), url, pollIntervalMillis))
                        .fillBufferFn(FlightDataSource::fillBuffer)
                        .build();
}
Example #12
Source File: PulsarConsumerBuilder.java (from hazelcast-jet-contrib, Apache License 2.0)

/**
 * Creates and returns the Pulsar Consumer {@link StreamSource} using the
 * builder configurations set earlier.
 */
public StreamSource<T> build() {
    return SourceBuilder.timestampedStream("pulsar-consumer-source",
            ctx -> new PulsarConsumerBuilder.ConsumerContext<>(
                    ctx.logger(), connectionSupplier.get(), topics, consumerConfig,
                    schemaSupplier, batchReceivePolicySupplier, projectionFn))
                        .<T>fillBufferFn(PulsarConsumerBuilder.ConsumerContext::fillBuffer)
                        .destroyFn(PulsarConsumerBuilder.ConsumerContext::destroy)
                        .distributed(2)
                        .build();
}
Example #13
Source File: MongoDBSourceBuilder.java (from hazelcast-jet-contrib, Apache License 2.0)

void fillBuffer(SourceBuilder.SourceBuffer<U> buffer) {
    for (int i = 0; i < BATCH_SIZE; i++) {
        if (cursor.hasNext()) {
            U item = mapFn.apply(cursor.next());
            if (item != null) {
                buffer.add(item);
            }
        } else {
            buffer.close();
            break; // the cursor is drained; avoid closing the buffer repeatedly
        }
    }
}
Example #14
Source File: MongoDBSourceBuilder.java (from hazelcast-jet-contrib, Apache License 2.0)

@Nonnull
StreamSource<U> build(@Nonnull SupplierEx<StreamContext<T, U>> contextFn) {
    return SourceBuilder
            .timestampedStream(name, ctx -> contextFn.get())
            .<U>fillBufferFn(StreamContext::fillBuffer)
            .createSnapshotFn(StreamContext::snapshot)
            .restoreSnapshotFn(StreamContext::restore)
            .destroyFn(StreamContext::close)
            .build();
}
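The snapshot and restore methods wired in above are not shown in this listing. Judging from the resumeToken bookkeeping in Example #2, a plausible sketch — hypothetical rather than the library's verbatim code, and assuming a single saved state per snapshot — is:

// Hypothetical sketch based on the resumeToken field seen in Example #2:
// snapshot the MongoDB change-stream resume token and put it back on
// restore, so the stream resumes where the last checkpoint left off.
BsonDocument snapshot() {
    return resumeToken;
}

void restore(List<BsonDocument> list) {
    resumeToken = list.get(0);
}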
Example #15
Source File: MongoDBSourceBuilder.java (from hazelcast-jet-contrib, Apache License 2.0)

/**
 * Creates and returns the MongoDB {@link BatchSource}.
 */
@Nonnull
public BatchSource<U> build() {
    checkNotNull(connectionSupplier, "connectionSupplier must be set");
    checkNotNull(databaseFn, "databaseFn must be set");
    checkNotNull(collectionFn, "collectionFn must be set");
    checkNotNull(searchFn, "searchFn must be set");
    checkNotNull(mapFn, "mapFn must be set");

    SupplierEx<? extends MongoClient> localConnectionSupplier = connectionSupplier;
    FunctionEx<? super MongoClient, ? extends MongoDatabase> localDatabaseFn = databaseFn;
    FunctionEx<? super MongoDatabase, ? extends MongoCollection<? extends T>> localCollectionFn =
            (FunctionEx<? super MongoDatabase, ? extends MongoCollection<? extends T>>) collectionFn;
    ConsumerEx<? super MongoClient> localDestroyFn = destroyFn;
    FunctionEx<? super MongoCollection<? extends T>, ? extends FindIterable<? extends T>> localSearchFn = searchFn;
    FunctionEx<? super T, U> localMapFn = mapFn;

    return SourceBuilder
            .batch(name, ctx -> {
                MongoClient client = localConnectionSupplier.get();
                MongoCollection<? extends T> collection = localCollectionFn.apply(localDatabaseFn.apply(client));
                return new BatchContext<>(client, collection, localSearchFn, localMapFn, localDestroyFn);
            })
            .<U>fillBufferFn(BatchContext::fillBuffer)
            .destroyFn(BatchContext::close)
            .build();
}
Example #16
Source File: PulsarReaderBuilder.java (from hazelcast-jet-contrib, Apache License 2.0)

/**
 * Creates and returns the Pulsar Reader {@link StreamSource} using the
 * builder configurations set earlier.
 */
public StreamSource<T> build() {
    return SourceBuilder.timestampedStream("pulsar-reader-source",
            ctx -> new PulsarReaderBuilder.ReaderContext<>(
                    ctx.logger(), connectionSupplier.get(), topic, readerConfig,
                    schemaSupplier, projectionFn))
                        .<T>fillBufferFn(PulsarReaderBuilder.ReaderContext::fillBuffer)
                        .createSnapshotFn(PulsarReaderBuilder.ReaderContext::createSnapshot)
                        .restoreSnapshotFn(PulsarReaderBuilder.ReaderContext::restoreSnapshot)
                        .destroyFn(PulsarReaderBuilder.ReaderContext::destroy)
                        .build();
}
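Similarly, createSnapshot and restoreSnapshot are not shown here. Given that fillBuffer in Example #7 records offset = message.getMessageId() for every emitted item, a hypothetical sketch could be:

// Hypothetical sketch: snapshot the last-seen Pulsar MessageId recorded by
// fillBuffer (Example #7); on restore, remember it so the reader can be
// re-created reading from that offset. A real implementation would also
// seek or re-create the reader at this position.
MessageId createSnapshot() {
    return offset;
}

void restoreSnapshot(List<MessageId> snapshots) {
    offset = snapshots.get(0);
}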
Example #17
Source File: TwitterSources.java (from hazelcast-jet-contrib, Apache License 2.0)

private void fillBuffer(SourceBuilder.SourceBuffer<Status> sourceBuffer) throws TwitterException {
    if (searchResult != null) {
        List<Status> tweets = searchResult.getTweets();
        for (Status tweet : tweets) {
            sourceBuffer.add(tweet);
        }
        searchResult = searchResult.nextQuery() != null
                ? twitter4JClient.search(searchResult.nextQuery())
                : null;
    } else {
        sourceBuffer.close();
    }
}
Example #18
Source File: TwitterSources.java (from hazelcast-jet-contrib, Apache License 2.0)

private void fillBuffer(SourceBuilder.SourceBuffer<String> sourceBuffer) {
    queue.drainTo(buffer, MAX_FILL_ELEMENTS);
    for (String item : buffer) {
        sourceBuffer.add(item);
    }
    buffer.clear();
}
Example #19
Source File: TwitterSources.java (from hazelcast-jet-contrib, Apache License 2.0)

/**
 * Equivalent of {@link #timestampedStream(Properties, SupplierEx)} with
 * the additional option to specify the Twitter {@code host}.
 *
 * @param host a Twitter host URL to connect to. These hosts are defined in
 *             {@link com.twitter.hbc.core.Constants}.
 */
@Nonnull
public static StreamSource<String> timestampedStream(
        @Nonnull Properties credentials,
        @Nonnull String host,
        @Nonnull SupplierEx<? extends StreamingEndpoint> endpointSupplier
) {
    return SourceBuilder.timestampedStream("twitter-timestamped-stream-source",
            ctx -> new TwitterStreamSourceContext(ctx.logger(), credentials, host, endpointSupplier))
                        .fillBufferFn(TwitterStreamSourceContext::fillTimestampedBuffer)
                        .destroyFn(TwitterStreamSourceContext::close)
                        .build();
}
Example #20
Source File: TwitterSources.java (from hazelcast-jet-contrib, Apache License 2.0)

/**
 * Equivalent of {@link #stream(Properties, SupplierEx)} with the
 * additional option to specify the Twitter {@code host}.
 *
 * @param host a Twitter host URL to connect to. These hosts are defined in
 *             {@link com.twitter.hbc.core.Constants}.
 */
@Nonnull
public static StreamSource<String> stream(
        @Nonnull Properties credentials,
        @Nonnull String host,
        @Nonnull SupplierEx<? extends StreamingEndpoint> endpointSupplier
) {
    return SourceBuilder.stream("twitter-stream-source",
            ctx -> new TwitterStreamSourceContext(ctx.logger(), credentials, host, endpointSupplier))
                        .fillBufferFn(TwitterStreamSourceContext::fillBuffer)
                        .destroyFn(TwitterStreamSourceContext::close)
                        .build();
}
Example #21
Source File: RedisSources.java (from hazelcast-jet-contrib, Apache License 2.0)

/**
 * Creates a {@link BatchSource} which queries a Redis Sorted Set for a
 * range of elements having their scores between the two limit values
 * provided (from & to). The returned elements will be emitted as they
 * arrive; the batch ends when all elements have been received.
 * <p>
 * Here's an example which reads a range from a Sorted Set, maps the items
 * to strings and drains them to some sink.
 * <pre>{@code
 * RedisURI uri = RedisURI.create("redis://localhost/");
 * Pipeline.create()
 *         .readFrom(RedisSources.sortedSet("source", uri, StringCodec::new,
 *                 "sortedSet", 10, 90))
 *         .map(sv -> (int) sv.getScore() + ":" + sv.getValue())
 *         .writeTo(sink);
 * }</pre>
 *
 * @param name    name of the source being created
 * @param uri     URI of the Redis server
 * @param codecFn supplier of {@link RedisCodec} instances, used in turn for
 *                serializing/deserializing keys and values
 * @param key     identifier of the Redis Sorted Set
 * @param from    start of the score range we are interested in (INCLUSIVE)
 * @param to      end of the score range we are interested in (INCLUSIVE)
 * @param <K>     type of the sorted set identifier
 * @param <V>     type of the values stored in the sorted set
 * @return source to use in {@link com.hazelcast.jet.pipeline.Pipeline#readFrom}
 */
@Nonnull
public static <K, V> BatchSource<ScoredValue<V>> sortedSet(
        @Nonnull String name,
        @Nonnull RedisURI uri,
        @Nonnull SupplierEx<RedisCodec<K, V>> codecFn,
        @Nonnull K key,
        long from,
        long to
) {
    Objects.requireNonNull(name, "name");
    Objects.requireNonNull(uri, "uri");
    Objects.requireNonNull(codecFn, "codecFn");
    Objects.requireNonNull(key, "key");

    return SourceBuilder.batch(name, ctx -> new SortedSetContext<>(uri, codecFn.get(), key, from, to))
                        .<ScoredValue<V>>fillBufferFn(SortedSetContext::fillBuffer)
                        .destroyFn(SortedSetContext::close)
                        .build();
}
Example #22
Source File: InfluxDbSources.java (from hazelcast-jet-contrib, Apache License 2.0)

/**
 * Creates a source that connects to an InfluxDB database using the given
 * connection supplier and emits items mapped with the given mapper
 * function.
 *
 * An example pipeline which reads records from InfluxDb, maps the first
 * two columns to a tuple and logs them can be seen below: <pre>{@code
 * Pipeline p = Pipeline.create();
 * p.readFrom(
 *         InfluxDbSources.influxDb("SELECT * FROM db..cpu_usages",
 *                 () -> InfluxDBFactory.connect(url, username, password).setDatabase(database),
 *                 (name, tags, columns, row) -> tuple2(row.get(0), row.get(1))))
 *  .writeTo(Sinks.logger());
 * }</pre>
 *
 * @param <T>                   type of the user object
 * @param query                 query to execute on the InfluxDb database
 * @param connectionSupplier    supplier which returns an {@link InfluxDB} instance
 * @param measurementProjection mapper function which takes the measurement name, tags set,
 *                              column names and values as arguments and produces the user
 *                              object of type {@code T} which will be emitted from this source
 * @return a source to use in {@link com.hazelcast.jet.pipeline.Pipeline#readFrom}
 */
@Nonnull
public static <T> BatchSource<T> influxDb(
        @Nonnull String query,
        @Nonnull SupplierEx<InfluxDB> connectionSupplier,
        @Nonnull MeasurementProjection<T> measurementProjection
) {
    checkNotNull(query, "query cannot be null");
    checkNotNull(connectionSupplier, "connectionSupplier cannot be null");
    checkNotNull(measurementProjection, "measurementProjection cannot be null");

    return SourceBuilder.batch("influxdb",
            ignored -> new InfluxDbSourceContext<>(query, connectionSupplier, null, measurementProjection))
                        .<T>fillBufferFn(InfluxDbSourceContext::fillBufferWithMeasurementMapping)
                        .destroyFn(InfluxDbSourceContext::close)
                        .build();
}
Example #23
Source File: KafkaConnectSources.java (from hazelcast-jet-contrib, Apache License 2.0)

@Override
protected boolean addToBuffer(SourceRecord record, SourceBuilder.TimestampedSourceBuffer<SourceRecord> buf) {
    long ts = record.timestamp() == null ? 0 : record.timestamp();
    buf.add(record, ts);
    return true;
}
Example #24
Source File: WebcamSource.java (from hazelcast-jet-demos, Apache License 2.0)

public static StreamSource<BufferedImage> webcam(long pollIntervalMillis) {
    return SourceBuilder.timestampedStream("webcam", ctx -> new WebcamSource(pollIntervalMillis))
                        .fillBufferFn(WebcamSource::addToBufferFn)
                        .destroyFn(WebcamSource::close)
                        .build();
}
Example #25
Source File: TradeSource.java (from hazelcast-jet-training, Apache License 2.0)

public static StreamSource<Trade> tradeSource(int tradesPerSec) {
    return SourceBuilder.timestampedStream("trade-source", x -> new TradeGenerator(SYMBOLS, tradesPerSec))
                        .fillBufferFn(TradeGenerator::fillBuffer)
                        .build();
}
Example #26
Source File: InfluxDbSources.java (from hazelcast-jet-contrib, Apache License 2.0)

/**
 * Creates a source that connects to an InfluxDB database using the given
 * connection supplier and emits items which are mapped to the provided
 * POJO class type.
 *
 * An example pipeline which reads records from InfluxDb, maps them
 * to the provided POJO and logs them can be seen below: <pre>{@code
 * Pipeline p = Pipeline.create();
 * p.readFrom(
 *         InfluxDbSources.influxDb("SELECT * FROM db..cpu",
 *                 () -> InfluxDBFactory.connect(url, username, password).setDatabase(database),
 *                 Cpu.class))
 *  .writeTo(Sinks.logger());
 * }</pre>
 *
 * @param query              query to execute on the InfluxDb database
 * @param connectionSupplier supplier which returns an {@link InfluxDB} instance
 * @param pojoClass          the POJO class instance
 * @param <T>                the POJO class
 * @return a source to use in {@link com.hazelcast.jet.pipeline.Pipeline#readFrom}
 */
@Nonnull
public static <T> BatchSource<T> influxDb(
        @Nonnull String query,
        @Nonnull SupplierEx<InfluxDB> connectionSupplier,
        @Nonnull Class<T> pojoClass
) {
    checkNotNull(query, "query cannot be null");
    checkNotNull(connectionSupplier, "connectionSupplier cannot be null");
    checkNotNull(pojoClass, "pojoClass cannot be null");

    return SourceBuilder.batch("influxdb",
            ignored -> new InfluxDbSourceContext<>(query, connectionSupplier, pojoClass, null))
                        .<T>fillBufferFn(InfluxDbSourceContext::fillBufferWithPojoMapping)
                        .destroyFn(InfluxDbSourceContext::close)
                        .build();
}
Example #27
Source File: TwitterSources.java (from hazelcast-jet-contrib, Apache License 2.0)

/**
 * Creates a {@link BatchSource} which emits tweets in the form of {@link
 * Status} by using Twitter's Search API for data ingestion. Twitter
 * restricts the repeated (continuous) access to its search endpoint so you
 * can only make 180 calls every 15 minutes. This source tries to get the
 * search results from the search endpoint until the API rate limit is
 * exhausted.
 * <p>
 * Example usage:
 *
 * <pre>{@code
 * Properties credentials = loadTwitterCredentials();
 * BatchSource<Status> twitterSearchSource =
 *         TwitterSources.search(credentials, "Jet flies");
 * Pipeline p = Pipeline.create();
 * BatchStage<Status> srcStage = p.readFrom(twitterSearchSource);
 * }</pre>
 *
 * @param credentials a Twitter OAuth1 credentials that consists of
 *                    "consumerKey", "consumerSecret", "token",
 *                    "tokenSecret" keys.
 * @param query       a search query
 * @return a batch source to use in {@link Pipeline#readFrom}
 *
 * @see <a href="https://developer.twitter.com/en/docs/basics/rate-limiting">Twitter's Rate Limiting</a>
 * @see <a href="https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets">
 *      Search tweets / Twitter Developers</a>
 * @see <a href="https://developer.twitter.com/en/docs/tweets/search/guides/standard-operators">
 *      Twitter API / Standard search operators</a>
 */
@Nonnull
public static BatchSource<Status> search(
        @Nonnull Properties credentials,
        @Nonnull String query
) {
    return SourceBuilder.batch("twitter-search-batch-source",
            ctx -> new TwitterBatchSourceContext(credentials, query))
                        .fillBufferFn(TwitterBatchSourceContext::fillBuffer)
                        .build();
}
Example #28
Source File: KafkaConnectSources.java (from hazelcast-jet-contrib, Apache License 2.0)

/**
 * A generic Kafka Connect source that provides the ability to plug any
 * Kafka Connect source into Jet pipelines for data ingestion.
 * <p>
 * You need to add the Kafka Connect connector JARs, or a ZIP file
 * containing the JARs, as a job resource via {@link com.hazelcast.jet.config.JobConfig#addJar(URL)}
 * or {@link com.hazelcast.jet.config.JobConfig#addJarsInZip(URL)}
 * respectively.
 * <p>
 * After that you can use the Kafka Connect connector with the
 * configuration parameters just as you would use it with Kafka. Hazelcast
 * Jet will drive the Kafka Connect connector from the pipeline, and
 * the records will be available to your pipeline as {@link SourceRecord}s.
 * <p>
 * In case of a failure, this source keeps track of the source
 * partition offsets; it will restore the partition offsets and
 * resume the consumption from where it left off.
 * <p>
 * Hazelcast Jet will instantiate a single task for the specified
 * source in the cluster.
 *
 * @param properties Kafka Connect properties
 * @return a source to use in {@link com.hazelcast.jet.pipeline.Pipeline#readFrom(StreamSource)}
 */
public static StreamSource<SourceRecord> connect(Properties properties) {
    String name = properties.getProperty("name");
    return SourceBuilder.timestampedStream(name, ctx -> new KafkaConnectSource(properties))
                        .fillBufferFn(KafkaConnectSource::fillBuffer)
                        .createSnapshotFn(KafkaConnectSource::createSnapshot)
                        .restoreSnapshotFn(KafkaConnectSource::restoreSnapshot)
                        .destroyFn(KafkaConnectSource::destroy)
                        .build();
}
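To illustrate the javadoc above, here is a hedged usage sketch of wiring a connector into a job; the connector class, property values, and JAR path are placeholders, not tested configuration:

// Sketch only: the property values and the JAR path are hypothetical.
Properties properties = new Properties();
properties.setProperty("name", "my-connector-source");                       // used as the source name
properties.setProperty("connector.class", "com.example.MySourceConnector");  // placeholder connector

Pipeline pipeline = Pipeline.create();
pipeline.readFrom(KafkaConnectSources.connect(properties))
        .withNativeTimestamps(0)
        .writeTo(Sinks.logger());

JobConfig jobConfig = new JobConfig();
jobConfig.addJar(new URL("file:///path/to/connector.jar")); // connector JARs, per the javadoc
Jet.bootstrappedInstance().newJob(pipeline, jobConfig);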
Example #29
Source File: AbstractKafkaConnectSource.java (from hazelcast-jet-contrib, Apache License 2.0)
protected abstract boolean addToBuffer(SourceRecord record, SourceBuilder.TimestampedSourceBuffer<T> buf);