Java Code Examples for org.elasticsearch.common.StopWatch#stop()
The following examples show how to use org.elasticsearch.common.StopWatch#stop().
You can go to the original project or source file by following the links above each example.
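All of the project examples below follow the same basic pattern: create a StopWatch, call start(), run the work being timed, call stop(), then read the elapsed time with totalTime(). Here is a minimal, self-contained sketch of that pattern. The StopWatch calls mirror the examples on this page; the class name and the doWork() placeholder are illustrative only.

import org.elasticsearch.common.StopWatch;

public class StopWatchStopExample {

    public static void main(String[] args) {
        // start() returns the StopWatch itself, so construction and start can be chained
        StopWatch stopWatch = new StopWatch().start();

        doWork(); // placeholder for the operation being timed

        // stop() closes the running task; totalTime() then reports the accumulated time
        stopWatch.stop();
        System.out.println("work took [" + stopWatch.totalTime() + "] ("
                + stopWatch.totalTime().millis() + " ms)");
    }

    private static void doWork() {
        // illustrative workload
    }
}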
Example 1
Source File: DLBasedIndexRecoverySourceHandler.java From Elasticsearch with Apache License 2.0 | 6 votes |
protected void prepareTargetForTranslog() {
    StopWatch stopWatch = new StopWatch().start();
    logger.trace("{} recovery [phase1] to {}: prepare remote engine for translog",
            request.shardId(), request.targetNode());
    final long startEngineStart = stopWatch.totalTime().millis();
    cancellableThreads.execute(new Interruptable() {
        @Override
        public void run() throws InterruptedException {
            // Send a request preparing the new shard's translog to receive
            // operations. This ensures the shard engine is started and disables
            // garbage collection (not the JVM's GC!) of tombstone deletes
            transportService.submitRequest(request.targetNode(), RecoveryTarget.Actions.PREPARE_TRANSLOG,
                    new RecoveryPrepareForTranslogOperationsRequest(request.recoveryId(), request.shardId(), 0),
                    TransportRequestOptions.builder().withTimeout(recoverySettings.internalActionTimeout()).build(),
                    EmptyTransportResponseHandler.INSTANCE_SAME).txGet();
        }
    });
    stopWatch.stop();
    response.startTime = stopWatch.totalTime().millis() - startEngineStart;
    logger.trace("{} recovery [phase1] to {}: remote engine start took [{}]",
            request.shardId(), request.targetNode(), stopWatch.totalTime());
}
Example 2
Source File: BlobRecoverySourceHandler.java From Elasticsearch with Apache License 2.0 | 6 votes |
protected void prepareTargetForTranslog(final Translog.View translogView) {
    StopWatch stopWatch = new StopWatch().start();
    logger.trace("{} recovery [phase1] to {}: prepare remote engine for translog",
            request.shardId(), request.targetNode());
    final long startEngineStart = stopWatch.totalTime().millis();
    cancellableThreads.execute(new Interruptable() {
        @Override
        public void run() throws InterruptedException {
            // Send a request preparing the new shard's translog to receive
            // operations. This ensures the shard engine is started and disables
            // garbage collection (not the JVM's GC!) of tombstone deletes
            transportService.submitRequest(request.targetNode(), RecoveryTarget.Actions.PREPARE_TRANSLOG,
                    new RecoveryPrepareForTranslogOperationsRequest(request.recoveryId(), request.shardId(), translogView.totalOperations()),
                    TransportRequestOptions.builder().withTimeout(recoverySettings.internalActionTimeout()).build(),
                    EmptyTransportResponseHandler.INSTANCE_SAME).txGet();
        }
    });
    stopWatch.stop();
    response.startTime = stopWatch.totalTime().millis() - startEngineStart;
    logger.trace("{} recovery [phase1] to {}: remote engine start took [{}]",
            request.shardId(), request.targetNode(), stopWatch.totalTime());
}
Example 3
Source File: BlobRecoverySourceHandler.java From Elasticsearch with Apache License 2.0 | 6 votes |
/**
 * Perform phase2 of the recovery process
 * <p/>
 * Phase2 takes a snapshot of the current translog *without* acquiring the
 * write lock (however, the translog snapshot is a point-in-time view of
 * the translog). It then sends each translog operation to the target node
 * so it can be replayed into the new shard.
 */
public void phase2(Translog.Snapshot snapshot) {
    if (shard.state() == IndexShardState.CLOSED) {
        throw new IndexShardClosedException(request.shardId());
    }
    cancellableThreads.checkForCancel();
    StopWatch stopWatch = new StopWatch().start();
    logger.trace("{} recovery [phase2] to {}: sending transaction log operations",
            request.shardId(), request.targetNode());
    // Send all the snapshot's translog operations to the target
    int totalOperations = sendSnapshot(snapshot);
    stopWatch.stop();
    logger.trace("{} recovery [phase2] to {}: took [{}]",
            request.shardId(), request.targetNode(), stopWatch.totalTime());
    response.phase2Time = stopWatch.totalTime().millis();
    response.phase2Operations = totalOperations;
}
Example 4
Source File: RecoverySourceHandler.java From Elasticsearch with Apache License 2.0 | 6 votes |
protected void prepareTargetForTranslog(final Translog.View translogView) {
    StopWatch stopWatch = new StopWatch().start();
    logger.trace("{} recovery [phase1] to {}: prepare remote engine for translog",
            request.shardId(), request.targetNode());
    final long startEngineStart = stopWatch.totalTime().millis();
    cancellableThreads.execute(new Interruptable() {
        @Override
        public void run() throws InterruptedException {
            // Send a request preparing the new shard's translog to receive
            // operations. This ensures the shard engine is started and disables
            // garbage collection (not the JVM's GC!) of tombstone deletes
            transportService.submitRequest(request.targetNode(), RecoveryTarget.Actions.PREPARE_TRANSLOG,
                    new RecoveryPrepareForTranslogOperationsRequest(request.recoveryId(), request.shardId(), translogView.totalOperations()),
                    TransportRequestOptions.builder().withTimeout(recoverySettings.internalActionTimeout()).build(),
                    EmptyTransportResponseHandler.INSTANCE_SAME).txGet();
        }
    });
    stopWatch.stop();
    response.startTime = stopWatch.totalTime().millis() - startEngineStart;
    logger.trace("{} recovery [phase1] to {}: remote engine start took [{}]",
            request.shardId(), request.targetNode(), stopWatch.totalTime());
}
Example 5
Source File: RecoverySourceHandler.java From Elasticsearch with Apache License 2.0 | 6 votes |
/**
 * Perform phase2 of the recovery process
 * <p>
 * Phase2 takes a snapshot of the current translog *without* acquiring the
 * write lock (however, the translog snapshot is a point-in-time view of
 * the translog). It then sends each translog operation to the target node
 * so it can be replayed into the new shard.
 */
public void phase2(Translog.Snapshot snapshot) {
    if (shard.state() == IndexShardState.CLOSED) {
        throw new IndexShardClosedException(request.shardId());
    }
    cancellableThreads.checkForCancel();
    StopWatch stopWatch = new StopWatch().start();
    logger.trace("{} recovery [phase2] to {}: sending transaction log operations",
            request.shardId(), request.targetNode());
    // Send all the snapshot's translog operations to the target
    int totalOperations = sendSnapshot(snapshot);
    stopWatch.stop();
    logger.trace("{} recovery [phase2] to {}: took [{}]",
            request.shardId(), request.targetNode(), stopWatch.totalTime());
    response.phase2Time = stopWatch.totalTime().millis();
    response.phase2Operations = totalOperations;
}
Example 6
Source File: OpenNlpService.java From elasticsearch-ingest-opennlp with Apache License 2.0 | 6 votes |
protected OpenNlpService start() {
    StopWatch sw = new StopWatch("models-loading");
    Map<String, String> settingsMap = IngestOpenNlpPlugin.MODEL_FILE_SETTINGS.getAsMap(settings);
    for (Map.Entry<String, String> entry : settingsMap.entrySet()) {
        String name = entry.getKey();
        sw.start(name);
        Path path = configDirectory.resolve(entry.getValue());
        try (InputStream is = Files.newInputStream(path)) {
            nameFinderModels.put(name, new TokenNameFinderModel(is));
        } catch (IOException e) {
            logger.error((Supplier<?>) () -> new ParameterizedMessage(
                    "Could not load model [{}] with path [{}]", name, path), e);
        }
        sw.stop();
    }
    if (settingsMap.keySet().size() == 0) {
        logger.error("Did not load any models for ingest-opennlp plugin, none configured");
    } else {
        logger.info("Read models in [{}] for {}", sw.totalTime(), settingsMap.keySet());
    }
    return this;
}
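Example 6 (and the FlsPerfTest examples further down) uses the named-task variant of the same API: a StopWatch created with an id, where each start(String)/stop() pair records a separate task and prettyPrint() summarizes all of them. A minimal, hedged sketch of that pattern follows; the task names and the Thread.sleep() workload are illustrative only.

import org.elasticsearch.common.StopWatch;

public class NamedTaskStopWatchExample {

    public static void main(String[] args) throws InterruptedException {
        StopWatch sw = new StopWatch("example-tasks");

        sw.start("first task");   // each start(name)/stop() pair records one task
        Thread.sleep(50);         // illustrative workload
        sw.stop();

        sw.start("second task");
        Thread.sleep(100);
        sw.stop();

        // totalTime() sums all recorded tasks; prettyPrint() lists each task with its share
        System.out.println("total: " + sw.totalTime());
        System.out.println(sw.prettyPrint());
    }
}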
Example 7
Source File: DLBasedIndexRecoverySourceHandler.java From Elasticsearch with Apache License 2.0 | 5 votes |
public void finalizeRecovery() {
    if (shard.state() == IndexShardState.CLOSED) {
        throw new IndexShardClosedException(request.shardId());
    }
    cancellableThreads.checkForCancel();
    StopWatch stopWatch = new StopWatch().start();
    logger.trace("[{}][{}] finalizing recovery to {}", indexName, shardId, request.targetNode());
    cancellableThreads.execute(new Interruptable() {
        @Override
        public void run() throws InterruptedException {
            // Send the FINALIZE request to the target node. The finalize request
            // clears unreferenced translog files, refreshes the engine now that
            // new segments are available, and enables garbage collection of
            // tombstone files. The shard is also moved to the POST_RECOVERY phase
            // during this time
            transportService.submitRequest(request.targetNode(), RecoveryTarget.Actions.FINALIZE,
                    new RecoveryFinalizeRecoveryRequest(request.recoveryId(), request.shardId()),
                    TransportRequestOptions.builder().withTimeout(recoverySettings.internalActionLongTimeout()).build(),
                    EmptyTransportResponseHandler.INSTANCE_SAME).txGet();
        }
    });
    if (request.markAsRelocated()) {
        // TODO what happens if the recovery process fails afterwards, we need to mark this back to started
        try {
            shard.relocated("to " + request.targetNode());
        } catch (IllegalIndexShardStateException e) {
            // we can ignore this exception since, on the other node, when it moved to phase3
            // it will also send shard started, which might cause the index shard we work against
            // to be closed by the time we get to the relocated method
        }
    }
    stopWatch.stop();
    logger.trace("[{}][{}] finalizing recovery to {}: took [{}]",
            indexName, shardId, request.targetNode(), stopWatch.totalTime());
}
Example 8
Source File: BlobRecoverySourceHandler.java From Elasticsearch with Apache License 2.0 | 5 votes |
/**
 * finalizes the recovery process
 */
public void finalizeRecovery() {
    if (shard.state() == IndexShardState.CLOSED) {
        throw new IndexShardClosedException(request.shardId());
    }
    cancellableThreads.checkForCancel();
    StopWatch stopWatch = new StopWatch().start();
    logger.trace("[{}][{}] finalizing recovery to {}", indexName, shardId, request.targetNode());
    cancellableThreads.execute(new Interruptable() {
        @Override
        public void run() throws InterruptedException {
            // Send the FINALIZE request to the target node. The finalize request
            // clears unreferenced translog files, refreshes the engine now that
            // new segments are available, and enables garbage collection of
            // tombstone files. The shard is also moved to the POST_RECOVERY phase
            // during this time
            transportService.submitRequest(request.targetNode(), RecoveryTarget.Actions.FINALIZE,
                    new RecoveryFinalizeRecoveryRequest(request.recoveryId(), request.shardId()),
                    TransportRequestOptions.builder().withTimeout(recoverySettings.internalActionLongTimeout()).build(),
                    EmptyTransportResponseHandler.INSTANCE_SAME).txGet();
        }
    });
    if (request.markAsRelocated()) {
        // TODO what happens if the recovery process fails afterwards, we need to mark this back to started
        try {
            shard.relocated("to " + request.targetNode());
        } catch (IllegalIndexShardStateException e) {
            // we can ignore this exception since, on the other node, when it moved to phase3
            // it will also send shard started, which might cause the index shard we work against
            // to be closed by the time we get to the relocated method
        }
    }
    stopWatch.stop();
    logger.trace("[{}][{}] finalizing recovery to {}: took [{}]",
            indexName, shardId, request.targetNode(), stopWatch.totalTime());
}
Example 9
Source File: RecoverySourceHandler.java From Elasticsearch with Apache License 2.0 | 5 votes |
/**
 * finalizes the recovery process
 */
public void finalizeRecovery() {
    if (shard.state() == IndexShardState.CLOSED) {
        throw new IndexShardClosedException(request.shardId());
    }
    cancellableThreads.checkForCancel();
    StopWatch stopWatch = new StopWatch().start();
    logger.trace("[{}][{}] finalizing recovery to {}", indexName, shardId, request.targetNode());
    cancellableThreads.execute(new Interruptable() {
        @Override
        public void run() throws InterruptedException {
            // Send the FINALIZE request to the target node. The finalize request
            // clears unreferenced translog files, refreshes the engine now that
            // new segments are available, and enables garbage collection of
            // tombstone files. The shard is also moved to the POST_RECOVERY phase
            // during this time
            transportService.submitRequest(request.targetNode(), RecoveryTarget.Actions.FINALIZE,
                    new RecoveryFinalizeRecoveryRequest(request.recoveryId(), request.shardId()),
                    TransportRequestOptions.builder().withTimeout(recoverySettings.internalActionLongTimeout()).build(),
                    EmptyTransportResponseHandler.INSTANCE_SAME).txGet();
        }
    });
    if (request.markAsRelocated()) {
        // TODO what happens if the recovery process fails afterwards, we need to mark this back to started
        try {
            shard.relocated("to " + request.targetNode());
        } catch (IllegalIndexShardStateException e) {
            // we can ignore this exception since, on the other node, when it moved to phase3
            // it will also send shard started, which might cause the index shard we work against
            // to be closed by the time we get to the relocated method
        }
    }
    stopWatch.stop();
    logger.trace("[{}][{}] finalizing recovery to {}: took [{}]",
            indexName, shardId, request.targetNode(), stopWatch.totalTime());
}
Example 10
Source File: FlsPerfTest.java From deprecated-security-advanced-modules with Apache License 2.0 | 4 votes |
@Test
public void testFlsPerfNamed() throws Exception {
    setup();
    HttpResponse res;
    StopWatch sw = new StopWatch("testFlsPerfNamed");
    sw.start("non fls");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty",
            encodeBasicHeader("admin", "admin"))).getStatusCode());
    sw.stop();
    Assert.assertTrue(res.getBody().contains("field1\""));
    Assert.assertTrue(res.getBody().contains("field2\""));
    Assert.assertTrue(res.getBody().contains("field50\""));
    Assert.assertTrue(res.getBody().contains("field997\""));
    sw.start("with fls");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty&size=1000",
            encodeBasicHeader("perf_named_only", "password"))).getStatusCode());
    sw.stop();
    Assert.assertFalse(res.getBody().contains("field1\""));
    Assert.assertFalse(res.getBody().contains("field2\""));
    Assert.assertTrue(res.getBody().contains("field50\""));
    Assert.assertTrue(res.getBody().contains("field997\""));
    sw.start("with fls 2 after warmup");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty&size=1000",
            encodeBasicHeader("perf_named_only", "password"))).getStatusCode());
    sw.stop();
    Assert.assertFalse(res.getBody().contains("field1\""));
    Assert.assertFalse(res.getBody().contains("field2\""));
    Assert.assertTrue(res.getBody().contains("field50\""));
    Assert.assertTrue(res.getBody().contains("field997\""));
    sw.start("with fls 3 after warmup");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty&size=1000",
            encodeBasicHeader("perf_named_only", "password"))).getStatusCode());
    sw.stop();
    Assert.assertFalse(res.getBody().contains("field1\""));
    Assert.assertFalse(res.getBody().contains("field2\""));
    Assert.assertTrue(res.getBody().contains("field50\""));
    Assert.assertTrue(res.getBody().contains("field997\""));
    System.out.println(sw.prettyPrint());
}
Example 11
Source File: FlsPerfTest.java From deprecated-security-advanced-modules with Apache License 2.0 | 4 votes |
@Test
public void testFlsPerfWcEx() throws Exception {
    setup();
    HttpResponse res;
    StopWatch sw = new StopWatch("testFlsPerfWcEx");
    sw.start("non fls");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty",
            encodeBasicHeader("admin", "admin"))).getStatusCode());
    sw.stop();
    Assert.assertTrue(res.getBody().contains("field1\""));
    Assert.assertTrue(res.getBody().contains("field2\""));
    Assert.assertTrue(res.getBody().contains("field50\""));
    Assert.assertTrue(res.getBody().contains("field997\""));
    sw.start("with fls");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty&size=1000",
            encodeBasicHeader("perf_wc_ex", "password"))).getStatusCode());
    sw.stop();
    Assert.assertTrue(res.getBody().contains("field1\""));
    Assert.assertTrue(res.getBody().contains("field2\""));
    Assert.assertFalse(res.getBody().contains("field50\""));
    Assert.assertFalse(res.getBody().contains("field997\""));
    sw.start("with fls 2 after warmup");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty&size=1000",
            encodeBasicHeader("perf_wc_ex", "password"))).getStatusCode());
    sw.stop();
    Assert.assertTrue(res.getBody().contains("field1\""));
    Assert.assertTrue(res.getBody().contains("field2\""));
    Assert.assertFalse(res.getBody().contains("field50\""));
    Assert.assertFalse(res.getBody().contains("field997\""));
    sw.start("with fls 3 after warmup");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty&size=1000",
            encodeBasicHeader("perf_wc_ex", "password"))).getStatusCode());
    sw.stop();
    Assert.assertTrue(res.getBody().contains("field1\""));
    Assert.assertTrue(res.getBody().contains("field2\""));
    Assert.assertFalse(res.getBody().contains("field50\""));
    Assert.assertFalse(res.getBody().contains("field997\""));
    System.out.println(sw.prettyPrint());
}
Example 12
Source File: FlsPerfTest.java From deprecated-security-advanced-modules with Apache License 2.0 | 4 votes |
@Test
public void testFlsPerfNamedEx() throws Exception {
    setup();
    HttpResponse res;
    StopWatch sw = new StopWatch("testFlsPerfNamedEx");
    sw.start("non fls");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty",
            encodeBasicHeader("admin", "admin"))).getStatusCode());
    sw.stop();
    Assert.assertTrue(res.getBody().contains("field1\""));
    Assert.assertTrue(res.getBody().contains("field2\""));
    Assert.assertTrue(res.getBody().contains("field50\""));
    Assert.assertTrue(res.getBody().contains("field997\""));
    sw.start("with fls");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty&size=1000",
            encodeBasicHeader("perf_named_ex", "password"))).getStatusCode());
    sw.stop();
    Assert.assertTrue(res.getBody().contains("field1\""));
    Assert.assertTrue(res.getBody().contains("field2\""));
    Assert.assertFalse(res.getBody().contains("field50\""));
    Assert.assertFalse(res.getBody().contains("field997\""));
    sw.start("with fls 2 after warmup");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty&size=1000",
            encodeBasicHeader("perf_named_ex", "password"))).getStatusCode());
    sw.stop();
    Assert.assertTrue(res.getBody().contains("field1\""));
    Assert.assertTrue(res.getBody().contains("field2\""));
    Assert.assertFalse(res.getBody().contains("field50\""));
    Assert.assertFalse(res.getBody().contains("field997\""));
    sw.start("with fls 3 after warmup");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty&size=1000",
            encodeBasicHeader("perf_named_ex", "password"))).getStatusCode());
    sw.stop();
    Assert.assertTrue(res.getBody().contains("field1\""));
    Assert.assertTrue(res.getBody().contains("field2\""));
    Assert.assertFalse(res.getBody().contains("field50\""));
    Assert.assertFalse(res.getBody().contains("field997\""));
    System.out.println(sw.prettyPrint());
}
Example 13
Source File: FlsPerfTest.java From deprecated-security-advanced-modules with Apache License 2.0 | 4 votes |
@Test
public void testFlsWcIn() throws Exception {
    setup();
    HttpResponse res;
    StopWatch sw = new StopWatch("testFlsWcIn");
    sw.start("non fls");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty",
            encodeBasicHeader("admin", "admin"))).getStatusCode());
    sw.stop();
    Assert.assertTrue(res.getBody().contains("field1\""));
    Assert.assertTrue(res.getBody().contains("field2\""));
    Assert.assertTrue(res.getBody().contains("field50\""));
    Assert.assertTrue(res.getBody().contains("field997\""));
    sw.start("with fls");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty&size=1000",
            encodeBasicHeader("perf_wc_in", "password"))).getStatusCode());
    sw.stop();
    Assert.assertFalse(res.getBody().contains("field0\""));
    Assert.assertTrue(res.getBody().contains("field50\""));
    Assert.assertTrue(res.getBody().contains("field997\""));
    sw.start("with fls 2 after warmup");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty&size=1000",
            encodeBasicHeader("perf_wc_in", "password"))).getStatusCode());
    sw.stop();
    Assert.assertFalse(res.getBody().contains("field0\""));
    Assert.assertTrue(res.getBody().contains("field50\""));
    Assert.assertTrue(res.getBody().contains("field997\""));
    sw.start("with fls 3 after warmup");
    Assert.assertEquals(HttpStatus.SC_OK, (res = rh.executeGetRequest("/deals/_search?pretty&size=1000",
            encodeBasicHeader("perf_wc_in", "password"))).getStatusCode());
    sw.stop();
    Assert.assertFalse(res.getBody().contains("field0\""));
    Assert.assertTrue(res.getBody().contains("field50\""));
    Assert.assertTrue(res.getBody().contains("field997\""));
    System.out.println(sw.prettyPrint());
}
Example 14
Source File: BlobRecoveryHandler.java From Elasticsearch with Apache License 2.0 | 4 votes |
public void phase1() throws Exception {
    logger.debug("[{}][{}] recovery [phase1] to {}: start",
            request.shardId().index().name(), request.shardId().id(), request.targetNode().getName());
    StopWatch stopWatch = new StopWatch().start();
    blobTransferTarget.startRecovery();
    blobTransferTarget.createActiveTransfersSnapshot();
    sendStartRecoveryRequest();
    final AtomicReference<Exception> lastException = new AtomicReference<Exception>();
    try {
        syncVarFiles(lastException);
    } catch (InterruptedException ex) {
        throw new ElasticsearchException("blob recovery phase1 failed", ex);
    }
    Exception exception = lastException.get();
    if (exception != null) {
        throw exception;
    }
    /**
     * as soon as the recovery starts the target node will receive PutChunkReplicaRequests
     * the target node will then request the bytes it is missing from the source node
     * (it is missing bytes from PutChunk/StartBlob requests that happened before the recovery)
     * here we need to block so that the target node has enough time to request the head chunks
     *
     * e.g.
     * Target Node receives Chunk X with bytes 10-19
     * Target Node requests bytes 0-9 from Source Node
     * Source Node sends bytes 0-9
     * Source Node sets transferTakenOver
     */
    blobTransferTarget.waitForGetHeadRequests(GET_HEAD_TIMEOUT, TimeUnit.SECONDS);
    blobTransferTarget.createActivePutHeadChunkTransfersSnapshot();
    /**
     * After receiving a getHeadRequest the source node starts to send HeadChunks to the target
     * wait for all PutHeadChunk-Runnables to finish before ending the recovery.
     */
    blobTransferTarget.waitUntilPutHeadChunksAreFinished();
    sendFinalizeRecoveryRequest();
    blobTransferTarget.stopRecovery();
    stopWatch.stop();
    logger.debug("[{}][{}] recovery [phase1] to {}: took [{}]",
            request.shardId().index().name(), request.shardId().id(), request.targetNode().getName(),
            stopWatch.totalTime());
}
Example 15
Source File: BlobRecoveryHandler.java From crate with Apache License 2.0 | 4 votes |
@Override
protected void blobRecoveryHook() throws Exception {
    LOGGER.debug("[{}][{}] recovery [phase1] to {}: start",
            request.shardId().getIndexName(), request.shardId().id(), request.targetNode().getName());
    final StopWatch stopWatch = new StopWatch().start();
    blobTransferTarget.startRecovery();
    blobTransferTarget.createActiveTransfersSnapshot();
    sendStartRecoveryRequest();
    final AtomicReference<Exception> lastException = new AtomicReference<>();
    try {
        syncVarFiles(lastException);
    } catch (InterruptedException ex) {
        throw new ElasticsearchException("blob recovery phase1 failed", ex);
    }
    Exception exception = lastException.get();
    if (exception != null) {
        throw exception;
    }
    /* as soon as the recovery starts the target node will receive PutChunkReplicaRequests
       the target node will then request the bytes it is missing from the source node
       (it is missing bytes from PutChunk/StartBlob requests that happened before the recovery)
       here we need to block so that the target node has enough time to request the head chunks

       e.g.
       Target Node receives Chunk X with bytes 10-19
       Target Node requests bytes 0-9 from Source Node
       Source Node sends bytes 0-9
       Source Node sets transferTakenOver
    */
    blobTransferTarget.waitForGetHeadRequests(GET_HEAD_TIMEOUT, TimeUnit.SECONDS);
    blobTransferTarget.createActivePutHeadChunkTransfersSnapshot();
    /* After receiving a getHeadRequest the source node starts to send HeadChunks to the target
       wait for all PutHeadChunk-Runnables to finish before ending the recovery.
    */
    blobTransferTarget.waitUntilPutHeadChunksAreFinished();
    sendFinalizeRecoveryRequest();
    blobTransferTarget.stopRecovery();
    stopWatch.stop();
    LOGGER.debug("[{}][{}] recovery [phase1] to {}: took [{}]",
            request.shardId().getIndexName(), request.shardId().id(), request.targetNode().getName(),
            stopWatch.totalTime());
}