com.twitter.util.Function Java Examples
The following examples show how to use com.twitter.util.Function.
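Before the project examples, here is a minimal standalone sketch of the basic pattern they all share: subclass com.twitter.util.Function, implement apply(), and pass the instance to a Future combinator. This sketch is not taken from any of the projects below; the class name FunctionSketch and the sample values are illustrative, and it assumes a util version in which com.twitter.util.Await is available for blocking on a Future from a main method.

import com.twitter.util.Await;
import com.twitter.util.Function;
import com.twitter.util.Future;

public class FunctionSketch {
  public static void main(String[] args) throws Exception {
    // com.twitter.util.Function adapts Scala's Function1 for Java callers:
    // subclass it and implement apply() with the desired transformation.
    Function<Integer, String> toHex = new Function<Integer, String>() {
      @Override
      public String apply(Integer value) {
        return Integer.toHexString(value);
      }
    };

    // A Function can be applied directly ...
    System.out.println(toHex.apply(255)); // prints "ff"

    // ... or handed to Future combinators, which is how the examples below use it.
    Future<String> hex = Future.value(255).map(toHex);
    System.out.println(Await.result(hex)); // prints "ff"
  }
}

The project examples that follow use the same anonymous-subclass idiom with map, flatMap, onSuccess, handle, rescue, and respond.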
Example #1
Source File: ConfigFileWatcher.java From singer with Apache License 2.0
/**
 * Adds a watch on the specified file. The file must exist, otherwise a FileNotFoundException
 * is thrown. If the file is deleted after a watch is established, the watcher will log errors
 * but continue to monitor it, and resume watching if it is recreated.
 *
 * @param filePath path to the file to watch.
 * @param onUpdate function to call when a change is detected to the file. The entire contents
 *                 of the file will be passed in to the function. Note that onUpdate will be
 *                 called once before this call completes, which facilitates initial load of
 *                 data. This callback is executed synchronously on the watcher thread - it is
 *                 important that the function be non-blocking.
 */
public synchronized void addWatch(String filePath, Function<byte[], Void> onUpdate)
    throws IOException {
  MorePreconditions.checkNotBlank(filePath);
  Preconditions.checkNotNull(onUpdate);

  // Read the file and make the initial onUpdate call.
  File file = new File(filePath);
  ByteSource byteSource = Files.asByteSource(file);
  onUpdate.apply(byteSource.read());

  // Add the file to our map if it isn't already there, and register the new change watcher.
  ConfigFileInfo configFileInfo = watchedFileMap.get(filePath);
  if (configFileInfo == null) {
    configFileInfo = new ConfigFileInfo(file.lastModified(), byteSource.hash(HASH_FUNCTION));
    watchedFileMap.put(filePath, configFileInfo);
  }
  configFileInfo.changeWatchers.add(onUpdate);
}
Example #2
Source File: ConfigFileWatcher.java From pinlater with Apache License 2.0
/**
 * Adds a watch on the specified file. The file must exist, otherwise a FileNotFoundException
 * is thrown. If the file is deleted after a watch is established, the watcher will log errors
 * but continue to monitor it, and resume watching if it is recreated.
 *
 * @param filePath path to the file to watch.
 * @param onUpdate function to call when a change is detected to the file. The entire contents
 *                 of the file will be passed in to the function. Note that onUpdate will be
 *                 called once before this call completes, which facilitates initial load of
 *                 data. This callback is executed synchronously on the watcher thread - it is
 *                 important that the function be non-blocking.
 */
public synchronized void addWatch(String filePath, Function<byte[], Void> onUpdate)
    throws IOException {
  MorePreconditions.checkNotBlank(filePath);
  Preconditions.checkNotNull(onUpdate);

  // Read the file and make the initial onUpdate call.
  File file = new File(filePath);
  ByteSource byteSource = Files.asByteSource(file);
  onUpdate.apply(byteSource.read());

  // Add the file to our map if it isn't already there, and register the new change watcher.
  ConfigFileInfo configFileInfo = watchedFileMap.get(filePath);
  if (configFileInfo == null) {
    configFileInfo = new ConfigFileInfo(file.lastModified(), byteSource.hash(HASH_FUNCTION));
    watchedFileMap.put(filePath, configFileInfo);
  }
  configFileInfo.changeWatchers.add(onUpdate);
}
Example #3
Source File: BKLogHandler.java From distributedlog with Apache License 2.0
/**
 * Get a count of records between beginDLSN and the end of the stream.
 *
 * @param beginDLSN dlsn marking the start of the range
 * @return the count of records present in the range
 */
public Future<Long> asyncGetLogRecordCount(final DLSN beginDLSN) {
  return checkLogStreamExistsAsync().flatMap(new Function<Void, Future<Long>>() {
    public Future<Long> apply(Void done) {
      return asyncGetFullLedgerList(true, false).flatMap(
          new Function<List<LogSegmentMetadata>, Future<Long>>() {
            public Future<Long> apply(List<LogSegmentMetadata> ledgerList) {
              List<Future<Long>> futureCounts = new ArrayList<Future<Long>>(ledgerList.size());
              for (LogSegmentMetadata ledger : ledgerList) {
                if (ledger.getLogSegmentSequenceNumber() >= beginDLSN.getLogSegmentSequenceNo()) {
                  futureCounts.add(asyncGetLogRecordCount(ledger, beginDLSN));
                }
              }
              return Future.collect(futureCounts).map(new Function<List<Long>, Long>() {
                public Long apply(List<Long> counts) {
                  return sum(counts);
                }
              });
            }
          });
    }
  });
}
Example #4
Source File: PinLaterRedisBackend.java From pinlater with Apache License 2.0
@Override
protected void createQueueImpl(final String queueName) throws Exception {
  // Add the queueName to the queueNames sorted set in each shard.
  final double currentTimeSeconds = System.currentTimeMillis() / 1000.0;
  for (final ImmutableMap.Entry<String, RedisPools> shard : shardMap.entrySet()) {
    final String queueNamesRedisKey = RedisBackendUtils.constructQueueNamesRedisKey(
        shard.getKey());
    RedisUtils.executeWithConnection(
        shard.getValue().getGeneralRedisPool(),
        new Function<Jedis, Void>() {
          @Override
          public Void apply(Jedis conn) {
            if (conn.zscore(queueNamesRedisKey, queueName) == null) {
              conn.zadd(queueNamesRedisKey, currentTimeSeconds, queueName);
            }
            return null;
          }
        });
  }
  reloadQueueNames();
}
Example #5
Source File: PinLaterRedisBackend.java From pinlater with Apache License 2.0
/**
 * Clean up all the keys in each shard. This method is only for test use.
 */
@VisibleForTesting
public Future<Void> cleanUpAllShards() {
  return futurePool.apply(new ExceptionalFunction0<Void>() {
    @Override
    public Void applyE() throws Throwable {
      for (final ImmutableMap.Entry<String, RedisPools> shard : shardMap.entrySet()) {
        RedisUtils.executeWithConnection(
            shard.getValue().getGeneralRedisPool(),
            new Function<Jedis, Void>() {
              @Override
              public Void apply(Jedis conn) {
                conn.flushAll();
                return null;
              }
            });
      }
      return null;
    }
  });
}
Example #6
Source File: PinLaterRedisBackend.java From pinlater with Apache License 2.0
/**
 * Reload queue names from redis to local cache.
 */
private synchronized void reloadQueueNames() throws Exception {
  ImmutableSet.Builder<String> builder = new ImmutableSet.Builder<String>();
  if (!shardMap.isEmpty()) {
    final Map.Entry<String, RedisPools> randomShard = getRandomShard(true);
    if (randomShard == null) {
      throw new PinLaterException(ErrorCode.NO_HEALTHY_SHARDS, "Unable to find healthy shard");
    }
    Set<String> newQueueNames = RedisUtils.executeWithConnection(
        randomShard.getValue().getGeneralRedisPool(),
        new Function<Jedis, Set<String>>() {
          @Override
          public Set<String> apply(Jedis conn) {
            return RedisBackendUtils.getQueueNames(conn, randomShard.getKey());
          }
        });
    builder.addAll(newQueueNames);
  }
  queueNames.set(builder.build());
}
Example #7
Source File: PinLaterRedisBackend.java From pinlater with Apache License 2.0
/**
 * Remove the job hash from redis. This function is used in test to simulate the case where the
 * job id is still in the queue, while the job hash is evicted by redis LRU.
 */
@VisibleForTesting
public Future<Void> removeJobHash(String jobDescriptor) {
  final PinLaterJobDescriptor jobDesc = new PinLaterJobDescriptor(jobDescriptor);
  return futurePool.apply(new ExceptionalFunction0<Void>() {
    @Override
    public Void applyE() throws Throwable {
      RedisUtils.executeWithConnection(
          shardMap.get(jobDesc.getShardName()).getGeneralRedisPool(),
          new Function<Jedis, Void>() {
            @Override
            public Void apply(Jedis conn) {
              String hashRedisKey = RedisBackendUtils.constructHashRedisKey(
                  jobDesc.getQueueName(), jobDesc.getShardName(), jobDesc.getLocalId());
              conn.del(hashRedisKey);
              return null;
            }
          });
      return null;
    }
  });
}
Example #8
Source File: JedisClientHelper.java From pinlater with Apache License 2.0
public boolean clientPing(JedisPool jedisPool) {
  boolean result = false;
  try {
    result = RedisUtils.executeWithConnection(
        jedisPool,
        new Function<Jedis, String>() {
          @Override
          public String apply(Jedis conn) {
            return conn.ping();
          }
        }).equals("PONG");
  } catch (Exception e) {
    // failed ping
  }
  return result;
}
Example #9
Source File: RedisUtils.java From pinlater with Apache License 2.0
/**
 * Gets the connection from the connection pool and adds the wrapper catch/finally block for the
 * given function.
 *
 * This helper method saves the trouble of dealing with the redis connection. When we get a
 * JedisConnectionException, we discard this connection. Otherwise, we return the connection
 * to the connection pool.
 *
 * @param jedisPool Jedis connection pool
 * @param redisDBNum Redis DB number (index) (if redisDBNum == -1, don't select a DB)
 * @param func The function to execute inside the catch/finally block.
 * @return A Resp object, which is the return value of the wrapped function.
 */
public static <Resp> Resp executeWithConnection(JedisPool jedisPool,
                                                int redisDBNum,
                                                Function<Jedis, Resp> func) {
  Preconditions.checkNotNull(jedisPool);
  Preconditions.checkNotNull(func);
  Jedis conn = null;
  boolean gotJedisConnException = false;
  try {
    conn = jedisPool.getResource();
    selectRedisDB(conn, redisDBNum);
    return func.apply(conn);
  } catch (JedisConnectionException e) {
    jedisPool.returnBrokenResource(conn);
    gotJedisConnException = true;
    throw e;
  } finally {
    if (conn != null && !gotJedisConnException) {
      jedisPool.returnResource(conn);
    }
  }
}
Example #10
Source File: PinLaterBackendUtils.java From pinlater with Apache License 2.0
/**
 * Executes a batch of requests asynchronously in a partitioned manner,
 * with the specified parallelism.
 *
 * @param requests List of requests to execute.
 * @param parallelism Desired parallelism (must be > 0).
 * @param executeBatch Function to execute each partitioned batch of requests.
 * @param <Req> Request type.
 * @param <Resp> Response type.
 * @return List of response futures.
 */
public static <Req, Resp> List<Future<Resp>> executePartitioned(
    List<Req> requests,
    int parallelism,
    Function<List<Req>, Future<Resp>> executeBatch) {
  MorePreconditions.checkNotBlank(requests);
  Preconditions.checkArgument(parallelism > 0);
  Preconditions.checkNotNull(executeBatch);

  int sizePerPartition = Math.max(requests.size() / parallelism, 1);
  List<List<Req>> partitions = Lists.partition(requests, sizePerPartition);
  List<Future<Resp>> futures = Lists.newArrayListWithCapacity(partitions.size());
  for (final List<Req> request : partitions) {
    futures.add(executeBatch.apply(request));
  }
  return futures;
}
Example #11
Source File: PinLaterServiceImpl.java From pinlater with Apache License 2.0
@Override
public Future<PinLaterDequeueResponse> dequeueJobs(
    RequestContext context, final PinLaterDequeueRequest request) {
  if (!queueConfig.allowDequeue(request.getQueueName(), request.getLimit())) {
    Stats.incr(request.getQueueName() + "_dequeue_requests_rate_limited");
    return Future.exception(new PinLaterException(ErrorCode.DEQUEUE_RATE_LIMITED,
        "Dequeue rate limit exceeded for queue: " + request.getQueueName()));
  }
  return Stats.timeFutureMillis(
      "PinLaterService.dequeueJobs",
      backend.dequeueJobs(context.getSource(), request).onSuccess(
          new Function<PinLaterDequeueResponse, BoxedUnit>() {
            @Override
            public BoxedUnit apply(PinLaterDequeueResponse response) {
              Stats.incr(request.getQueueName() + "_dequeue", response.getJobsSize());
              return null;
            }
          }).rescue(new LogAndWrapException<PinLaterDequeueResponse>(
              context, "dequeueJobs", request.toString())));
}
Example #12
Source File: PinLaterServiceImpl.java From pinlater with Apache License 2.0
@Override
public Future<Void> ackDequeuedJobs(RequestContext context,
                                    final PinLaterJobAckRequest request) {
  return Stats.timeFutureMillis(
      "PinLaterService.ackDequeuedJobs",
      backend.ackDequeuedJobs(request).onSuccess(
          new Function<Void, BoxedUnit>() {
            @Override
            public BoxedUnit apply(Void aVoid) {
              Stats.incr(request.getQueueName() + "_ack_succeeded",
                  request.getJobsSucceededSize());
              Stats.incr(request.getQueueName() + "_ack_failed",
                  request.getJobsFailedSize());
              return null;
            }
          }).rescue(new LogAndWrapException<Void>(context, "ackDequeuedJobs",
              request.toString())));
}
Example #13
Source File: PinLaterQueryIssuer.java From pinlater with Apache License 2.0
private void issueEnqueueRequests(PinLater.ServiceIface iface) throws InterruptedException {
  Preconditions.checkNotNull(queueName, "Queue was not specified.");
  final AtomicLong queriesIssued = new AtomicLong(0);
  final Semaphore permits = new Semaphore(concurrency);
  while (numQueries == -1 || queriesIssued.get() < numQueries) {
    final PinLaterEnqueueRequest request = new PinLaterEnqueueRequest();
    request.setQueueName(queueName);
    for (int i = 0; i < batchSize; i++) {
      PinLaterJob job = new PinLaterJob(ByteBuffer.wrap(
          new String("task_" + random.nextInt(Integer.MAX_VALUE)).getBytes()));
      job.setPriority(priority);
      request.addToJobs(job);
    }
    final long startTimeNanos = System.nanoTime();
    queriesIssued.incrementAndGet();
    permits.acquire();
    iface.enqueueJobs(REQUEST_CONTEXT, request).respond(
        new Function<Try<PinLaterEnqueueResponse>, BoxedUnit>() {
          @Override
          public BoxedUnit apply(Try<PinLaterEnqueueResponse> responseTry) {
            permits.release();
            statsLogger.requestComplete(
                Duration.fromNanoseconds(System.nanoTime() - startTimeNanos));
            if (responseTry.isThrow()) {
              LOG.info("Exception for request: " + request + " : "
                  + ((Throw) responseTry).e());
            }
            return BoxedUnit.UNIT;
          }
        });
  }
  permits.acquire(concurrency);
  LOG.info("Enqueue queries issued: " + queriesIssued);
}
Example #14
Source File: PinLaterServiceImpl.java From pinlater with Apache License 2.0
@Override
public Future<PinLaterEnqueueResponse> enqueueJobs(
    RequestContext context, final PinLaterEnqueueRequest request) {
  return Stats.timeFutureMillis(
      "PinLaterService.enqueueJobs",
      backend.enqueueJobs(request).onSuccess(
          new Function<PinLaterEnqueueResponse, BoxedUnit>() {
            @Override
            public BoxedUnit apply(PinLaterEnqueueResponse response) {
              Stats.incr(request.getQueueName() + "_enqueue", request.getJobsSize());
              return null;
            }
          }).rescue(new LogAndWrapException<PinLaterEnqueueResponse>(
              context, "enqueueJobs", request.toString())));
}
Example #15
Source File: PinLaterRedisBackend.java From pinlater with Apache License 2.0
@Override
protected int retryFailedJobsFromShard(
    final String queueName,
    final String shardName,
    final int priority,
    final int attemptsRemaining,
    final long runAfterTimestampMillis,
    final int limit) throws Exception {
  // Skip the shard if it is unhealthy.
  if (!healthChecker.isServerLive(
      shardMap.get(shardName).getHost(), shardMap.get(shardName).getPort())) {
    return 0;
  }
  return RedisUtils.executeWithConnection(
      shardMap.get(shardName).getGeneralRedisPool(),
      new Function<Jedis, Integer>() {
        @Override
        public Integer apply(Jedis conn) {
          String failedQueueRedisKey = RedisBackendUtils.constructQueueRedisKey(
              queueName, shardName, priority, PinLaterJobState.FAILED);
          String pendingQueueRedisKey = RedisBackendUtils.constructQueueRedisKey(
              queueName, shardName, priority, PinLaterJobState.PENDING);
          String hashRedisKeyPrefix = RedisBackendUtils.constructHashRedisKeyPrefix(
              queueName, shardName);
          List<String> keys = Lists.newArrayList(
              failedQueueRedisKey, pendingQueueRedisKey, hashRedisKeyPrefix);
          List<String> argv = Lists.newArrayList(
              String.valueOf(runAfterTimestampMillis / 1000.0),
              String.valueOf(limit),
              String.valueOf(attemptsRemaining));
          Object result = conn.eval(RedisLuaScripts.RETRY_JOBS, keys, argv);
          return ((Long) result).intValue();
        }
      });
}
Example #16
Source File: PinLaterRedisBackend.java From pinlater with Apache License 2.0
@Override
protected void deleteQueueImpl(final String queueName) throws Exception {
  for (final ImmutableMap.Entry<String, RedisPools> shard : shardMap.entrySet()) {
    final String queueNamesRedisKey = RedisBackendUtils.constructQueueNamesRedisKey(
        shard.getKey());
    RedisUtils.executeWithConnection(
        shard.getValue().getGeneralRedisPool(),
        new Function<Jedis, Void>() {
          @Override
          public Void apply(Jedis conn) {
            // We will delete the queue from the queueNames sorted set, and delete all the jobs
            // in the pending and in_progress queues.
            // We intentionally do not delete the jobs in succeeded and failed queues to avoid
            // blocking redis. In the end, those jobs will be garbage collected.
            // There is chance that we have pending jobs again when there are indeed in progress
            // jobs and they get ack'ed as failure before we delete the in progress queue. But
            // since in practice, we won't delete queues until we know for sure no one is
            // enqueuing or dequeuing them, this is not an issue.
            conn.zrem(queueNamesRedisKey, queueName);
            List<PinLaterJobState> jobStatesToDelete = Lists.newArrayList(
                PinLaterJobState.PENDING, PinLaterJobState.IN_PROGRESS);
            for (int priority = 1; priority <= numPriorityLevels; priority++) {
              for (PinLaterJobState jobState : jobStatesToDelete) {
                String queueRedisKey = RedisBackendUtils.constructQueueRedisKey(
                    queueName, shard.getKey(), priority, jobState);
                String hashRedisKeyPrefix = RedisBackendUtils.constructHashRedisKeyPrefix(
                    queueName, shard.getKey());
                List<String> keys = Lists.newArrayList(queueRedisKey, hashRedisKeyPrefix);
                List<String> args = Lists.newArrayList();
                conn.eval(RedisLuaScripts.DELETE_QUEUE, keys, args);
              }
            }
            return null;
          }
        });
  }
  reloadQueueNames();
}
Example #17
Source File: PinLaterServiceImpl.java From pinlater with Apache License 2.0
@Override
public Future<Void> checkpointJobs(RequestContext context,
                                   final PinLaterCheckpointJobsRequest request) {
  return Stats.timeFutureMillis(
      "PinLaterService.checkpointJobs",
      backend.checkpointJobs(context.getSource(), request).onSuccess(
          new Function<Void, BoxedUnit>() {
            @Override
            public BoxedUnit apply(Void aVoid) {
              Stats.incr(request.getQueueName() + "_checkpoint", request.getRequestsSize());
              return null;
            }
          }).rescue(new LogAndWrapException<Void>(
              context, "checkpointJobs", request.toString())));
}
Example #18
Source File: PinLaterBackendBase.java From pinlater with Apache License 2.0
public Future<Void> checkpointJobs(final String source,
                                   final PinLaterCheckpointJobsRequest request) {
  // Partition the requests such that there are roughly <queryParallelism> partitions. Then
  // execute those in parallel. Within each partition, each checkpoint is executed serially.
  List<Future<Void>> futures = Lists.newArrayList();
  if (request.getRequestsSize() > 0) {
    futures.addAll(PinLaterBackendUtils.executePartitioned(
        request.getRequests(),
        queryParallelism,
        new Function<List<PinLaterCheckpointJobRequest>, Future<Void>>() {
          @Override
          public Future<Void> apply(final List<PinLaterCheckpointJobRequest> checkpointRequests) {
            return futurePool.apply(new ExceptionalFunction0<Void>() {
              @Override
              public Void applyE() throws Throwable {
                for (PinLaterCheckpointJobRequest checkpointRequest : checkpointRequests) {
                  checkpointSingleJob(source, request.getQueueName(), checkpointRequest,
                      numAutoRetries);
                }
                return null;
              }
            });
          }
        }));
  }
  return Future.collect(futures).voided();
}
Example #19
Source File: PinLaterBackendBase.java From pinlater with Apache License 2.0
public Future<Map<String, PinLaterJobInfo>> lookupJobs(final PinLaterLookupJobRequest request) {
  List<Future<Pair<String, PinLaterJobInfo>>> lookupJobFutures =
      Lists.newArrayListWithCapacity(request.getJobDescriptorsSize());
  for (final String jobDescriptor : request.getJobDescriptors()) {
    Future<Pair<String, PinLaterJobInfo>> lookupJobFuture = futurePool.apply(
        new ExceptionalFunction0<Pair<String, PinLaterJobInfo>>() {
          @Override
          public Pair<String, PinLaterJobInfo> applyE() throws Throwable {
            PinLaterJobDescriptor jobDesc = new PinLaterJobDescriptor(jobDescriptor);
            PinLaterJobInfo jobInfo = lookupJobFromShard(
                jobDesc.getQueueName(),
                jobDesc.getShardName(),
                jobDesc.getPriority(),
                jobDesc.getLocalId(),
                request.isIncludeBody());
            return new Pair<String, PinLaterJobInfo>(jobDescriptor, jobInfo);
          }
        });
    lookupJobFutures.add(lookupJobFuture);
  }
  return Future.collect(lookupJobFutures).map(
      new Function<List<Pair<String, PinLaterJobInfo>>, Map<String, PinLaterJobInfo>>() {
        @Override
        public Map<String, PinLaterJobInfo> apply(List<Pair<String, PinLaterJobInfo>> jobPairs) {
          Map<String, PinLaterJobInfo> lookupJobMap = Maps.newHashMap();
          for (Pair<String, PinLaterJobInfo> jobPair : jobPairs) {
            if (jobPair.getSecond() != null) {
              lookupJobMap.put(jobPair.getFirst(), jobPair.getSecond());
            }
          }
          return lookupJobMap;
        }
      });
}
Example #20
Source File: PinLaterBackendBase.java From pinlater with Apache License 2.0
public Future<PinLaterDequeueResponse> dequeueJobs(final String source,
                                                   final PinLaterDequeueRequest request) {
  Future<PinLaterDequeueResponse> dequeueFuture;
  try {
    dequeueFuture = dequeueSemaphoreMap.get(request.getQueueName()).acquire().flatMap(
        new Function<Permit, Future<PinLaterDequeueResponse>>() {
          @Override
          public Future<PinLaterDequeueResponse> apply(final Permit permit) {
            return futurePool.apply(new ExceptionalFunction0<PinLaterDequeueResponse>() {
              @Override
              public PinLaterDequeueResponse applyE() throws Throwable {
                return dequeueJobsImpl(source, request, numAutoRetries);
              }
            }).respond(new Function<Try<PinLaterDequeueResponse>, BoxedUnit>() {
              @Override
              public BoxedUnit apply(Try<PinLaterDequeueResponse> responseTry) {
                permit.release();
                return BoxedUnit.UNIT;
              }
            });
          }
        });
  } catch (ExecutionException e) {
    // The dequeueSemaphoreMap's get() can in theory throw an ExecutionException, but we
    // never expect it in practice since our load method is simply new'ing up an AsyncSemaphore.
    dequeueFuture = Future.exception(e);
  }

  // Dequeue requests can contain ack requests as payloads. If so, we execute both in parallel.
  Future<Void> ackFuture = request.isSetJobAckRequest()
      ? ackDequeuedJobsImpl(request.getJobAckRequest()) : Future.Void();

  return dequeueFuture.join(ackFuture).map(
      new Function<Tuple2<PinLaterDequeueResponse, Void>, PinLaterDequeueResponse>() {
        @Override
        public PinLaterDequeueResponse apply(Tuple2<PinLaterDequeueResponse, Void> tuple) {
          return tuple._1();
        }
      });
}
Example #21
Source File: PinLaterBackendBase.java From pinlater with Apache License 2.0
public Future<Integer> getJobCount(final PinLaterGetJobCountRequest request) {
  // If no priority is specified, search for jobs of all priorities.
  Range<Integer> priorityRange = request.isSetPriority()
      ? Range.closed((int) request.getPriority(), (int) request.getPriority())
      : Range.closed(1, numPriorityLevels);
  final ContiguousSet<Integer> priorities =
      ContiguousSet.create(priorityRange, DiscreteDomain.integers());

  // Execute count query on each shard in parallel.
  List<Future<Integer>> futures = Lists.newArrayListWithCapacity(getShards().size());
  for (final String shardName : getShards()) {
    futures.add(futurePool.apply(new ExceptionalFunction0<Integer>() {
      @Override
      public Integer applyE() throws Throwable {
        return getJobCountFromShard(
            request.getQueueName(),
            shardName,
            priorities,
            request.getJobState(),
            request.isCountFutureJobs(),
            request.getBodyRegexToMatch());
      }
    }));
  }

  return Future.collect(futures).map(
      new Function<List<Integer>, Integer>() {
        @Override
        public Integer apply(List<Integer> shardCounts) {
          int totalCount = 0;
          for (Integer shardCount : shardCounts) {
            totalCount += shardCount;
          }
          return totalCount;
        }
      });
}
Example #22
Source File: Decider.java From singer with Apache License 2.0
@VisibleForTesting
Decider(ConfigFileWatcher watcher, String filePath) {
  try {
    watcher.addWatch(filePath, new Function<byte[], Void>() {
      public Void apply(byte[] deciderJson) {
        Map<String, Integer> newDeciderMap = GSON.fromJson(new String(deciderJson),
            new TypeToken<HashMap<String, Integer>>() {
            }.getType());
        if (newDeciderMap != null) {
          if (newDeciderMap.isEmpty()) {
            LOG.warn("Got empty decider set.");
          }
          mDeciderMap = newDeciderMap;
        } else {
          LOG.warn("Got a null object from decider json.");
        }
        return null;
      }
    });
  } catch (IOException e) {
    // Initialize with an empty map.
    mDeciderMap = Maps.newHashMap();
    LOG.warn("Exception while initializing decider.", e);
    Stats.incr("decider_config_file_ioexception");
  }
  this.rand.setSeed(0);
}
Example #23
Source File: FutureUtil.java From terrapin with Apache License 2.0
public static <T> Future<Try<T>> lifeToTry(Future<T> future) {
  return future.map(new Function<T, Try<T>>() {
    @Override
    public Try<T> apply(T o) {
      return new Return(o);
    }
  }).handle(new Function<Throwable, Try<T>>() {
    @Override
    public Try<T> apply(Throwable throwable) {
      return new Throw(throwable);
    }
  });
}
Example #24
Source File: FutureUtils.java From distributedlog with Apache License 2.0
/**
 * Process the list of items one by one using the process function <i>processFunc</i>.
 * The process will be stopped immediately if it fails on processing any one.
 *
 * @param collection list of items
 * @param processFunc process function
 * @param callbackExecutor executor to process the item
 * @return future presents the list of processed results
 */
public static <T, R> Future<List<R>> processList(List<T> collection,
                                                 Function<T, Future<R>> processFunc,
                                                 @Nullable ExecutorService callbackExecutor) {
  ListFutureProcessor<T, R> processor =
      new ListFutureProcessor<T, R>(collection, processFunc, callbackExecutor);
  if (null != callbackExecutor) {
    callbackExecutor.submit(processor);
  } else {
    processor.run();
  }
  return processor.promise;
}
Example #25
Source File: BKLogWriteHandler.java From distributedlog with Apache License 2.0
private Future<List<LogSegmentMetadata>> deleteLogSegments(
    final List<LogSegmentMetadata> logs) {
  if (LOG.isTraceEnabled()) {
    LOG.trace("Purging logs for {} : {}", getFullyQualifiedName(), logs);
  }
  return FutureUtils.processList(logs,
      new Function<LogSegmentMetadata, Future<LogSegmentMetadata>>() {
        @Override
        public Future<LogSegmentMetadata> apply(LogSegmentMetadata segment) {
          return deleteLogSegment(segment);
        }
      }, scheduler);
}
Example #26
Source File: BKLogWriteHandler.java From distributedlog with Apache License 2.0
Future<List<LogSegmentMetadata>> purgeLogSegmentsOlderThanTimestamp(
    final long minTimestampToKeep) {
  if (minTimestampToKeep >= Utils.nowInMillis()) {
    return Future.exception(new IllegalArgumentException(
        "Invalid timestamp " + minTimestampToKeep + " to purge logs for "
            + getFullyQualifiedName()));
  }
  return asyncGetFullLedgerList(false, false).flatMap(
      new Function<List<LogSegmentMetadata>, Future<List<LogSegmentMetadata>>>() {
        @Override
        public Future<List<LogSegmentMetadata>> apply(List<LogSegmentMetadata> logSegments) {
          List<LogSegmentMetadata> purgeList =
              new ArrayList<LogSegmentMetadata>(logSegments.size());
          int numCandidates = getNumCandidateLogSegmentsToTruncate(logSegments);
          for (int iterator = 0; iterator < numCandidates; iterator++) {
            LogSegmentMetadata l = logSegments.get(iterator);
            // When application explicitly truncates segments; timestamp based purge is
            // only used to cleanup log segments that have been marked for truncation
            if ((l.isTruncated() || !conf.getExplicitTruncationByApplication())
                && !l.isInProgress() && (l.getCompletionTime() < minTimestampToKeep)) {
              purgeList.add(l);
            } else {
              // stop truncating log segments if we find either an inprogress or a partially
              // truncated log segment
              break;
            }
          }
          LOG.info("Deleting log segments older than {} for {} : {}",
              new Object[] { minTimestampToKeep, getFullyQualifiedName(), purgeList });
          return deleteLogSegments(purgeList);
        }
      });
}
Example #27
Source File: BKLogReadHandler.java From distributedlog with Apache License 2.0
private Future<Void> ensureReadLockPathExist() {
  final Promise<Void> promise = new Promise<Void>();
  promise.setInterruptHandler(new com.twitter.util.Function<Throwable, BoxedUnit>() {
    @Override
    public BoxedUnit apply(Throwable t) {
      FutureUtils.setException(promise,
          new LockCancelledException(readLockPath, "Could not ensure read lock path", t));
      return null;
    }
  });
  Optional<String> parentPathShouldNotCreate = Optional.of(logMetadata.getLogRootPath());
  Utils.zkAsyncCreateFullPathOptimisticRecursive(zooKeeperClient, readLockPath,
      parentPathShouldNotCreate, new byte[0], zooKeeperClient.getDefaultACL(),
      CreateMode.PERSISTENT,
      new org.apache.zookeeper.AsyncCallback.StringCallback() {
        @Override
        public void processResult(final int rc, final String path, Object ctx, String name) {
          scheduler.submit(new Runnable() {
            @Override
            public void run() {
              if (KeeperException.Code.NONODE.intValue() == rc) {
                FutureUtils.setException(promise, new LogNotFoundException(
                    String.format("Log %s does not exist or has been deleted",
                        getFullyQualifiedName())));
              } else if (KeeperException.Code.OK.intValue() == rc) {
                FutureUtils.setValue(promise, null);
                LOG.trace("Created path {}.", path);
              } else if (KeeperException.Code.NODEEXISTS.intValue() == rc) {
                FutureUtils.setValue(promise, null);
                LOG.trace("Path {} is already existed.", path);
              } else if (DistributedLogConstants.ZK_CONNECTION_EXCEPTION_RESULT_CODE == rc) {
                FutureUtils.setException(promise,
                    new ZooKeeperClient.ZooKeeperConnectionException(path));
              } else if (DistributedLogConstants.DL_INTERRUPTED_EXCEPTION_RESULT_CODE == rc) {
                FutureUtils.setException(promise, new DLInterruptedException(path));
              } else {
                FutureUtils.setException(promise,
                    KeeperException.create(KeeperException.Code.get(rc)));
              }
            }
          });
        }
      }, null);
  return promise;
}
Example #28
Source File: FutureUtils.java From distributedlog with Apache License 2.0
ListFutureProcessor(List<T> items,
                    Function<T, Future<R>> processFunc,
                    ExecutorService callbackExecutor) {
  this.itemsIter = items.iterator();
  this.processFunc = processFunc;
  this.promise = new Promise<List<R>>();
  this.promise.setInterruptHandler(this);
  this.results = new ArrayList<R>();
  this.callbackExecutor = callbackExecutor;
}
Example #29
Source File: DistributedLogClientImpl.java From distributedlog with Apache License 2.0
@Override
public Future<Void> setAcceptNewStream(boolean enabled) {
  Map<SocketAddress, ProxyClient> snapshot = clientManager.getAllClients();
  List<Future<Void>> futures = new ArrayList<Future<Void>>(snapshot.size());
  for (Map.Entry<SocketAddress, ProxyClient> entry : snapshot.entrySet()) {
    futures.add(entry.getValue().getService().setAcceptNewStream(enabled));
  }
  return Future.collect(futures).map(new Function<List<Void>, Void>() {
    @Override
    public Void apply(List<Void> list) {
      return null;
    }
  });
}
Example #30
Source File: ReaderWorker.java From distributedlog with Apache License 2.0
@Override
public void onFailure(Throwable cause) {
  scheduleReinitStream(streamIdx).map(new Function<Void, Void>() {
    @Override
    public Void apply(Void value) {
      prevDLSN = null;
      prevSequenceId = Long.MIN_VALUE;
      readLoop();
      return null;
    }
  });
}