Java Code Examples for org.apache.distributedlog.DistributedLogConfiguration#setAckQuorumSize()
The following examples show how to use
org.apache.distributedlog.DistributedLogConfiguration#setAckQuorumSize().
The links above each example lead to the original project and source file.
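setAckQuorumSize() controls how many bookies in the write quorum must acknowledge an entry before the write is considered durable. BookKeeper requires the three sizes to satisfy ackQuorumSize <= writeQuorumSize <= ensembleSize, which both examples below respect. As a minimal sketch of that invariant (plain Java, no DistributedLog dependency; QuorumConfig is a hypothetical helper, not part of the library):

```java
public class QuorumConfig {
    public final int ensembleSize;
    public final int writeQuorumSize;
    public final int ackQuorumSize;

    // Hypothetical helper: enforces the ordering BookKeeper expects
    // between the three sizes (ack <= write <= ensemble).
    public QuorumConfig(int ensembleSize, int writeQuorumSize, int ackQuorumSize) {
        if (!isValid(ensembleSize, writeQuorumSize, ackQuorumSize)) {
            throw new IllegalArgumentException(
                "require ackQuorumSize <= writeQuorumSize <= ensembleSize, got "
                    + ackQuorumSize + " <= " + writeQuorumSize + " <= " + ensembleSize);
        }
        this.ensembleSize = ensembleSize;
        this.writeQuorumSize = writeQuorumSize;
        this.ackQuorumSize = ackQuorumSize;
    }

    public static boolean isValid(int ensemble, int write, int ack) {
        return ack >= 1 && ack <= write && write <= ensemble;
    }
}
```

Example 2 below uses ensemble=3, write=3, ack=2: every entry is sent to all three bookies, but acknowledgement from two is enough, so one slow bookie does not stall writes.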
Example 1
Source File: DLFileSystem.java From distributedlog with Apache License 2.0
@Override
public FSDataOutputStream create(Path path,
                                 FsPermission fsPermission,
                                 boolean overwrite,
                                 int bufferSize,
                                 short replication,
                                 long blockSize,
                                 Progressable progressable) throws IOException {
    // For overwrite, delete the existing file first.
    if (overwrite) {
        delete(path, false);
    }
    DistributedLogConfiguration confLocal = new DistributedLogConfiguration();
    confLocal.addConfiguration(dlConf);
    confLocal.setEnsembleSize(replication);
    confLocal.setWriteQuorumSize(replication);
    confLocal.setAckQuorumSize(replication);
    confLocal.setMaxLogSegmentBytes(blockSize);
    return append(path, bufferSize, Optional.of(confLocal));
}
Example 2
Source File: DLNamespace.java From Elasticsearch with Apache License 2.0
public static synchronized DistributedLogNamespace getNamespace(Settings settings, String localNodeId)
        throws IllegalArgumentException, NullPointerException, IOException {
    if (logNamespace == null) {
        String logServiceUrl = settings.get(LOG_SERVICE_ENDPOINT);
        URI uri = URI.create(logServiceUrl);
        DistributedLogConfiguration conf = new DistributedLogConfiguration();
        conf.setOutputBufferSize(settings.getAsInt(DL_MERGE_BUFFER_SIZE, 4 * 1024));
        // Immediate flush writes the user record plus a control record right away so the
        // current client can read the record immediately, at the cost of two writes to
        // BookKeeper. We do not need that here: the replica replays the record and does
        // not need to read it immediately. If the primary fails and then recovers, it
        // writes a control record into BookKeeper and can read the log again.
        conf.setImmediateFlushEnabled(false);
        // Disable the write lock: the lease already guarantees a single writer.
        conf.setWriteLockEnabled(false);
        // Flush periodically to advance the LAC so other nodes can see the latest records.
        conf.setPeriodicFlushFrequencyMilliSeconds(2);
        // Disable batching of writes to BookKeeper.
        conf.setMinDelayBetweenImmediateFlushMs(0);
        conf.setZKSessionTimeoutSeconds(settings.getAsInt(ZK_SESSION_TIMEOUT, 10));
        conf.setLockTimeout(DistributedLogConstants.LOCK_IMMEDIATE);
        // Must be 0 to disable time-based rolling and enable size-based rolling.
        conf.setLogSegmentRollingIntervalMinutes(0);
        // 1 MB shifted by the configured exponent: 256 MB with the default of 8.
        conf.setMaxLogSegmentBytes(1 << 20 << settings.getAsInt(DL_SEGMENT_SIZE_MB, 8));
        conf.setEnsembleSize(settings.getAsInt(DL_ENSEMBLE_SIZE, 3));
        conf.setAckQuorumSize(settings.getAsInt(DL_ACK_QUORUM_SIZE, 2));
        conf.setWriteQuorumSize(settings.getAsInt(DL_REPLICA_NUM, 3));
        conf.setRowAwareEnsemblePlacementEnabled(false);
        conf.setReadAheadMaxRecords(100);
        conf.setReadAheadBatchSize(3);
        // Set to true to disable automatic truncation.
        conf.setExplicitTruncationByApplication(true);
        // DistributedLog purges truncated log segments after 1 hour.
        conf.setRetentionPeriodHours(1);
        logNamespace = DistributedLogNamespaceBuilder.newBuilder()
                .conf(conf)
                .uri(uri)
                .regionId(DistributedLogConstants.LOCAL_REGION_ID)
                .clientId(localNodeId)
                .build();
    }
    return logNamespace;
}