org.apache.hadoop.security.SaslPropertiesResolver Java Examples
The following examples show how to use
org.apache.hadoop.security.SaslPropertiesResolver.
Each example notes its original project, source file, and license.
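Before the project examples, here is a minimal, self-contained sketch of the API itself. It is not taken from any of the projects below; the protection value and the loopback address are illustrative assumptions.

import java.net.InetAddress;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SaslPropertiesResolver;

public class SaslPropertiesResolverSketch {
  public static void main(String[] args) throws Exception {
    // hadoop.rpc.protection selects the SASL QOP; "privacy" is only an
    // illustrative choice (authentication, integrity or privacy).
    Configuration conf = new Configuration();
    conf.set("hadoop.rpc.protection", "privacy");

    // getInstance() honors the resolver class configured via
    // HADOOP_SECURITY_SASL_PROPS_RESOLVER_CLASS (see Example #2) and falls
    // back to the default resolver otherwise.
    SaslPropertiesResolver resolver = SaslPropertiesResolver.getInstance(conf);

    // Server side: SASL properties to use for a connection from this address.
    Map<String, String> serverProps =
        resolver.getServerProperties(InetAddress.getByName("127.0.0.1"));

    // Client side: SASL properties to use when connecting to this address.
    Map<String, String> clientProps =
        resolver.getClientProperties(InetAddress.getByName("127.0.0.1"));

    System.out.println("server SASL props: " + serverProps);
    System.out.println("client SASL props: " + clientProps);
  }
}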
Example #1
Source File: SaslDataTransferServer.java From hadoop with Apache License 2.0
/**
 * Receives SASL negotiation for general-purpose handshake.
 *
 * @param peer connection peer
 * @param underlyingOut connection output stream
 * @param underlyingIn connection input stream
 * @return new pair of streams, wrapped after SASL negotiation
 * @throws IOException for any error
 */
private IOStreamPair getSaslStreams(Peer peer, OutputStream underlyingOut,
    InputStream underlyingIn) throws IOException {
  if (peer.hasSecureChannel() ||
      dnConf.getTrustedChannelResolver().isTrusted(getPeerAddress(peer))) {
    return new IOStreamPair(underlyingIn, underlyingOut);
  }

  SaslPropertiesResolver saslPropsResolver = dnConf.getSaslPropsResolver();
  Map<String, String> saslProps = saslPropsResolver.getServerProperties(
      getPeerAddress(peer));

  CallbackHandler callbackHandler = new SaslServerCallbackHandler(
      new PasswordFunction() {
        @Override
        public char[] apply(String userName) throws IOException {
          return buildServerPassword(userName);
        }
      });
  return doSaslHandshake(underlyingOut, underlyingIn, saslProps,
      callbackHandler);
}
Example #2
Source File: DataTransferSaslUtil.java From hadoop with Apache License 2.0
/**
 * Creates a SaslPropertiesResolver from the given configuration.  This method
 * works by cloning the configuration, translating configuration properties
 * specific to DataTransferProtocol to what SaslPropertiesResolver expects,
 * and then delegating to SaslPropertiesResolver for initialization.  This
 * method returns null if SASL protection has not been configured for
 * DataTransferProtocol.
 *
 * @param conf configuration to read
 * @return SaslPropertiesResolver for DataTransferProtocol, or null if not
 *   configured
 */
public static SaslPropertiesResolver getSaslPropertiesResolver(
    Configuration conf) {
  String qops = conf.get(DFS_DATA_TRANSFER_PROTECTION_KEY);
  if (qops == null || qops.isEmpty()) {
    LOG.debug("DataTransferProtocol not using SaslPropertiesResolver, no " +
        "QOP found in configuration for {}", DFS_DATA_TRANSFER_PROTECTION_KEY);
    return null;
  }
  Configuration saslPropsResolverConf = new Configuration(conf);
  saslPropsResolverConf.set(HADOOP_RPC_PROTECTION, qops);
  Class<? extends SaslPropertiesResolver> resolverClass = conf.getClass(
      HADOOP_SECURITY_SASL_PROPS_RESOLVER_CLASS,
      SaslPropertiesResolver.class, SaslPropertiesResolver.class);
  resolverClass = conf.getClass(
      DFS_DATA_TRANSFER_SASL_PROPS_RESOLVER_CLASS_KEY,
      resolverClass, SaslPropertiesResolver.class);
  saslPropsResolverConf.setClass(HADOOP_SECURITY_SASL_PROPS_RESOLVER_CLASS,
      resolverClass, SaslPropertiesResolver.class);
  SaslPropertiesResolver resolver = SaslPropertiesResolver.getInstance(
      saslPropsResolverConf);
  LOG.debug("DataTransferProtocol using SaslPropertiesResolver, configured " +
      "QOP {} = {}, configured class {} = {}",
      DFS_DATA_TRANSFER_PROTECTION_KEY, qops,
      DFS_DATA_TRANSFER_SASL_PROPS_RESOLVER_CLASS_KEY, resolverClass);
  return resolver;
}
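As a usage sketch for the utility above (not code from the Hadoop sources), a caller only needs to set the DataTransferProtocol key; the configuration key name is the standard HDFS one and the QOP values are illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil;
import org.apache.hadoop.security.SaslPropertiesResolver;

public class DataTransferResolverSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // dfs.data.transfer.protection plays the same role for DataTransferProtocol
    // that hadoop.rpc.protection plays for RPC; the utility copies it across
    // before delegating to SaslPropertiesResolver.
    conf.set("dfs.data.transfer.protection", "integrity,privacy");

    // Returns null when dfs.data.transfer.protection is unset or empty.
    SaslPropertiesResolver resolver =
        DataTransferSaslUtil.getSaslPropertiesResolver(conf);
    System.out.println(resolver == null
        ? "SASL protection not configured for DataTransferProtocol"
        : "Resolver class: " + resolver.getClass().getName());
  }
}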
Example #3
Source File: DataNode.java From hadoop with Apache License 2.0
/**
 * Checks if the DataNode has a secure configuration if security is enabled.
 * There are 2 possible configurations that are considered secure:
 * 1. The server has bound to privileged ports for RPC and HTTP via
 *    SecureDataNodeStarter.
 * 2. The configuration enables SASL on DataTransferProtocol and HTTPS (no
 *    plain HTTP) for the HTTP server.  The SASL handshake guarantees
 *    authentication of the RPC server before a client transmits a secret, such
 *    as a block access token.  Similarly, SSL guarantees authentication of the
 *    HTTP server before a client transmits a secret, such as a delegation
 *    token.
 * It is not possible to run with both privileged ports and SASL on
 * DataTransferProtocol.  For backwards-compatibility, the connection logic
 * must check if the target port is a privileged port, and if so, skip the
 * SASL handshake.
 *
 * @param dnConf DNConf to check
 * @param conf Configuration to check
 * @param resources SecureResources obtained for DataNode
 * @throws RuntimeException if security enabled, but configuration is insecure
 */
private static void checkSecureConfig(DNConf dnConf, Configuration conf,
    SecureResources resources) throws RuntimeException {
  if (!UserGroupInformation.isSecurityEnabled()) {
    return;
  }
  SaslPropertiesResolver saslPropsResolver = dnConf.getSaslPropsResolver();
  if (resources != null && saslPropsResolver == null) {
    return;
  }
  if (dnConf.getIgnoreSecurePortsForTesting()) {
    return;
  }
  if (saslPropsResolver != null &&
      DFSUtil.getHttpPolicy(conf) == HttpConfig.Policy.HTTPS_ONLY &&
      resources == null) {
    return;
  }
  throw new RuntimeException("Cannot start secure DataNode without " +
      "configuring either privileged resources or SASL RPC data transfer " +
      "protection and SSL for HTTP.  Using privileged resources in " +
      "combination with SASL RPC data transfer protection is not supported.");
}
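For reference, a hedged sketch of the second secure configuration described in the Javadoc above: SASL on DataTransferProtocol plus an HTTPS-only web endpoint, which lets the DataNode pass this check without privileged ports. The key names are the standard HDFS ones; the QOP value is an illustrative assumption.

import org.apache.hadoop.conf.Configuration;

public class SaslSecureDataNodeConfSketch {
  public static Configuration build() {
    Configuration conf = new Configuration();
    // Enables SASL on DataTransferProtocol, so DNConf.getSaslPropsResolver()
    // returns a non-null resolver.
    conf.set("dfs.data.transfer.protection", "privacy");
    // HTTPS_ONLY satisfies the DFSUtil.getHttpPolicy(conf) check above.
    conf.set("dfs.http.policy", "HTTPS_ONLY");
    return conf;
  }
}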
Example #4
Source File: SaslDataTransferServer.java From big-c with Apache License 2.0
/**
 * Receives SASL negotiation for general-purpose handshake.
 *
 * @param peer connection peer
 * @param underlyingOut connection output stream
 * @param underlyingIn connection input stream
 * @return new pair of streams, wrapped after SASL negotiation
 * @throws IOException for any error
 */
private IOStreamPair getSaslStreams(Peer peer, OutputStream underlyingOut,
    InputStream underlyingIn) throws IOException {
  if (peer.hasSecureChannel() ||
      dnConf.getTrustedChannelResolver().isTrusted(getPeerAddress(peer))) {
    return new IOStreamPair(underlyingIn, underlyingOut);
  }

  SaslPropertiesResolver saslPropsResolver = dnConf.getSaslPropsResolver();
  Map<String, String> saslProps = saslPropsResolver.getServerProperties(
      getPeerAddress(peer));

  CallbackHandler callbackHandler = new SaslServerCallbackHandler(
      new PasswordFunction() {
        @Override
        public char[] apply(String userName) throws IOException {
          return buildServerPassword(userName);
        }
      });
  return doSaslHandshake(underlyingOut, underlyingIn, saslProps,
      callbackHandler);
}
Example #5
Source File: DataTransferSaslUtil.java From big-c with Apache License 2.0
/**
 * Creates a SaslPropertiesResolver from the given configuration.  This method
 * works by cloning the configuration, translating configuration properties
 * specific to DataTransferProtocol to what SaslPropertiesResolver expects,
 * and then delegating to SaslPropertiesResolver for initialization.  This
 * method returns null if SASL protection has not been configured for
 * DataTransferProtocol.
 *
 * @param conf configuration to read
 * @return SaslPropertiesResolver for DataTransferProtocol, or null if not
 *   configured
 */
public static SaslPropertiesResolver getSaslPropertiesResolver(
    Configuration conf) {
  String qops = conf.get(DFS_DATA_TRANSFER_PROTECTION_KEY);
  if (qops == null || qops.isEmpty()) {
    LOG.debug("DataTransferProtocol not using SaslPropertiesResolver, no " +
        "QOP found in configuration for {}", DFS_DATA_TRANSFER_PROTECTION_KEY);
    return null;
  }
  Configuration saslPropsResolverConf = new Configuration(conf);
  saslPropsResolverConf.set(HADOOP_RPC_PROTECTION, qops);
  Class<? extends SaslPropertiesResolver> resolverClass = conf.getClass(
      HADOOP_SECURITY_SASL_PROPS_RESOLVER_CLASS,
      SaslPropertiesResolver.class, SaslPropertiesResolver.class);
  resolverClass = conf.getClass(
      DFS_DATA_TRANSFER_SASL_PROPS_RESOLVER_CLASS_KEY,
      resolverClass, SaslPropertiesResolver.class);
  saslPropsResolverConf.setClass(HADOOP_SECURITY_SASL_PROPS_RESOLVER_CLASS,
      resolverClass, SaslPropertiesResolver.class);
  SaslPropertiesResolver resolver = SaslPropertiesResolver.getInstance(
      saslPropsResolverConf);
  LOG.debug("DataTransferProtocol using SaslPropertiesResolver, configured " +
      "QOP {} = {}, configured class {} = {}",
      DFS_DATA_TRANSFER_PROTECTION_KEY, qops,
      DFS_DATA_TRANSFER_SASL_PROPS_RESOLVER_CLASS_KEY, resolverClass);
  return resolver;
}
Example #6
Source File: DataNode.java From big-c with Apache License 2.0
/**
 * Checks if the DataNode has a secure configuration if security is enabled.
 * There are 2 possible configurations that are considered secure:
 * 1. The server has bound to privileged ports for RPC and HTTP via
 *    SecureDataNodeStarter.
 * 2. The configuration enables SASL on DataTransferProtocol and HTTPS (no
 *    plain HTTP) for the HTTP server.  The SASL handshake guarantees
 *    authentication of the RPC server before a client transmits a secret, such
 *    as a block access token.  Similarly, SSL guarantees authentication of the
 *    HTTP server before a client transmits a secret, such as a delegation
 *    token.
 * It is not possible to run with both privileged ports and SASL on
 * DataTransferProtocol.  For backwards-compatibility, the connection logic
 * must check if the target port is a privileged port, and if so, skip the
 * SASL handshake.
 *
 * @param dnConf DNConf to check
 * @param conf Configuration to check
 * @param resources SecureResources obtained for DataNode
 * @throws RuntimeException if security enabled, but configuration is insecure
 */
private static void checkSecureConfig(DNConf dnConf, Configuration conf,
    SecureResources resources) throws RuntimeException {
  if (!UserGroupInformation.isSecurityEnabled()) {
    return;
  }
  SaslPropertiesResolver saslPropsResolver = dnConf.getSaslPropsResolver();
  if (resources != null && saslPropsResolver == null) {
    return;
  }
  if (dnConf.getIgnoreSecurePortsForTesting()) {
    return;
  }
  if (saslPropsResolver != null &&
      DFSUtil.getHttpPolicy(conf) == HttpConfig.Policy.HTTPS_ONLY &&
      resources == null) {
    return;
  }
  throw new RuntimeException("Cannot start secure DataNode without " +
      "configuring either privileged resources or SASL RPC data transfer " +
      "protection and SSL for HTTP.  Using privileged resources in " +
      "combination with SASL RPC data transfer protection is not supported.");
}
Example #7
Source File: Server.java From hadoop with Apache License 2.0
/**
 * Constructs a server listening on the named port and address. Parameters passed must
 * be of the named class.  The <code>handlerCount</code> determines
 * the number of handler threads that will be used to process calls.
 * If queueSizePerHandler or numReaders are not -1 they will be used instead of parameters
 * from configuration. Otherwise the configuration will be picked up.
 *
 * If rpcRequestClass is null then the rpcRequestClass must have been
 * registered via {@link #registerProtocolEngine(RpcPayloadHeader.RpcKind,
 * Class, RPC.RpcInvoker)}
 * This parameter has been retained for compatibility with existing tests
 * and usage.
 */
@SuppressWarnings("unchecked")
protected Server(String bindAddress, int port,
    Class<? extends Writable> rpcRequestClass, int handlerCount,
    int numReaders, int queueSizePerHandler, Configuration conf,
    String serverName, SecretManager<? extends TokenIdentifier> secretManager,
    String portRangeConfig)
    throws IOException {
  this.bindAddress = bindAddress;
  this.conf = conf;
  this.portRangeConfig = portRangeConfig;
  this.port = port;
  this.rpcRequestClass = rpcRequestClass;
  this.handlerCount = handlerCount;
  this.socketSendBufferSize = 0;
  this.maxDataLength = conf.getInt(CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH,
      CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH_DEFAULT);
  if (queueSizePerHandler != -1) {
    this.maxQueueSize = queueSizePerHandler;
  } else {
    this.maxQueueSize = handlerCount * conf.getInt(
        CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_KEY,
        CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT);
  }
  this.maxRespSize = conf.getInt(
      CommonConfigurationKeys.IPC_SERVER_RPC_MAX_RESPONSE_SIZE_KEY,
      CommonConfigurationKeys.IPC_SERVER_RPC_MAX_RESPONSE_SIZE_DEFAULT);
  if (numReaders != -1) {
    this.readThreads = numReaders;
  } else {
    this.readThreads = conf.getInt(
        CommonConfigurationKeys.IPC_SERVER_RPC_READ_THREADS_KEY,
        CommonConfigurationKeys.IPC_SERVER_RPC_READ_THREADS_DEFAULT);
  }
  this.readerPendingConnectionQueue = conf.getInt(
      CommonConfigurationKeys.IPC_SERVER_RPC_READ_CONNECTION_QUEUE_SIZE_KEY,
      CommonConfigurationKeys.IPC_SERVER_RPC_READ_CONNECTION_QUEUE_SIZE_DEFAULT);

  // Setup appropriate callqueue
  final String prefix = getQueueClassPrefix();
  this.callQueue = new CallQueueManager<Call>(getQueueClass(prefix, conf),
      maxQueueSize, prefix, conf);

  this.secretManager = (SecretManager<TokenIdentifier>) secretManager;
  this.authorize =
      conf.getBoolean(CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION,
          false);

  // configure supported authentications
  this.enabledAuthMethods = getAuthMethods(secretManager, conf);
  this.negotiateResponse = buildNegotiateResponse(enabledAuthMethods);

  // Start the listener here and let it bind to the port
  listener = new Listener();
  this.port = listener.getAddress().getPort();
  connectionManager = new ConnectionManager();
  this.rpcMetrics = RpcMetrics.create(this, conf);
  this.rpcDetailedMetrics = RpcDetailedMetrics.create(this.port);
  this.tcpNoDelay = conf.getBoolean(
      CommonConfigurationKeysPublic.IPC_SERVER_TCPNODELAY_KEY,
      CommonConfigurationKeysPublic.IPC_SERVER_TCPNODELAY_DEFAULT);

  // Create the responder here
  responder = new Responder();

  if (secretManager != null || UserGroupInformation.isSecurityEnabled()) {
    SaslRpcServer.init(conf);
    saslPropsResolver = SaslPropertiesResolver.getInstance(conf);
  }

  this.exceptionsHandler.addTerseExceptions(StandbyException.class);
}
Example #8
Source File: Server.java From big-c with Apache License 2.0
/**
 * Constructs a server listening on the named port and address. Parameters passed must
 * be of the named class.  The <code>handlerCount</code> determines
 * the number of handler threads that will be used to process calls.
 * If queueSizePerHandler or numReaders are not -1 they will be used instead of parameters
 * from configuration. Otherwise the configuration will be picked up.
 *
 * If rpcRequestClass is null then the rpcRequestClass must have been
 * registered via {@link #registerProtocolEngine(RpcPayloadHeader.RpcKind,
 * Class, RPC.RpcInvoker)}
 * This parameter has been retained for compatibility with existing tests
 * and usage.
 */
@SuppressWarnings("unchecked")
protected Server(String bindAddress, int port,
    Class<? extends Writable> rpcRequestClass, int handlerCount,
    int numReaders, int queueSizePerHandler, Configuration conf,
    String serverName, SecretManager<? extends TokenIdentifier> secretManager,
    String portRangeConfig)
    throws IOException {
  this.bindAddress = bindAddress;
  this.conf = conf;
  this.portRangeConfig = portRangeConfig;
  this.port = port;
  this.rpcRequestClass = rpcRequestClass;
  this.handlerCount = handlerCount;
  this.socketSendBufferSize = 0;
  this.maxDataLength = conf.getInt(CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH,
      CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH_DEFAULT);
  if (queueSizePerHandler != -1) {
    this.maxQueueSize = queueSizePerHandler;
  } else {
    this.maxQueueSize = handlerCount * conf.getInt(
        CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_KEY,
        CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT);
  }
  this.maxRespSize = conf.getInt(
      CommonConfigurationKeys.IPC_SERVER_RPC_MAX_RESPONSE_SIZE_KEY,
      CommonConfigurationKeys.IPC_SERVER_RPC_MAX_RESPONSE_SIZE_DEFAULT);
  if (numReaders != -1) {
    this.readThreads = numReaders;
  } else {
    this.readThreads = conf.getInt(
        CommonConfigurationKeys.IPC_SERVER_RPC_READ_THREADS_KEY,
        CommonConfigurationKeys.IPC_SERVER_RPC_READ_THREADS_DEFAULT);
  }
  this.readerPendingConnectionQueue = conf.getInt(
      CommonConfigurationKeys.IPC_SERVER_RPC_READ_CONNECTION_QUEUE_SIZE_KEY,
      CommonConfigurationKeys.IPC_SERVER_RPC_READ_CONNECTION_QUEUE_SIZE_DEFAULT);

  // Setup appropriate callqueue
  final String prefix = getQueueClassPrefix();
  this.callQueue = new CallQueueManager<Call>(getQueueClass(prefix, conf),
      maxQueueSize, prefix, conf);

  this.secretManager = (SecretManager<TokenIdentifier>) secretManager;
  this.authorize =
      conf.getBoolean(CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION,
          false);

  // configure supported authentications
  this.enabledAuthMethods = getAuthMethods(secretManager, conf);
  this.negotiateResponse = buildNegotiateResponse(enabledAuthMethods);

  // Start the listener here and let it bind to the port
  listener = new Listener();
  this.port = listener.getAddress().getPort();
  connectionManager = new ConnectionManager();
  this.rpcMetrics = RpcMetrics.create(this, conf);
  this.rpcDetailedMetrics = RpcDetailedMetrics.create(this.port);
  this.tcpNoDelay = conf.getBoolean(
      CommonConfigurationKeysPublic.IPC_SERVER_TCPNODELAY_KEY,
      CommonConfigurationKeysPublic.IPC_SERVER_TCPNODELAY_DEFAULT);

  // Create the responder here
  responder = new Responder();

  if (secretManager != null || UserGroupInformation.isSecurityEnabled()) {
    SaslRpcServer.init(conf);
    saslPropsResolver = SaslPropertiesResolver.getInstance(conf);
  }

  this.exceptionsHandler.addTerseExceptions(StandbyException.class);
}
Example #9
Source File: FanOutOneBlockAsyncDFSOutputSaslHelper.java From hbase with Apache License 2.0
static void trySaslNegotiate(Configuration conf, Channel channel, DatanodeInfo dnInfo,
    int timeoutMs, DFSClient client, Token<BlockTokenIdentifier> accessToken,
    Promise<Void> saslPromise) throws IOException {
  SaslDataTransferClient saslClient = client.getSaslDataTransferClient();
  SaslPropertiesResolver saslPropsResolver = SASL_ADAPTOR.getSaslPropsResolver(saslClient);
  TrustedChannelResolver trustedChannelResolver =
      SASL_ADAPTOR.getTrustedChannelResolver(saslClient);
  AtomicBoolean fallbackToSimpleAuth = SASL_ADAPTOR.getFallbackToSimpleAuth(saslClient);
  InetAddress addr = ((InetSocketAddress) channel.remoteAddress()).getAddress();
  if (trustedChannelResolver.isTrusted() || trustedChannelResolver.isTrusted(addr)) {
    saslPromise.trySuccess(null);
    return;
  }
  DataEncryptionKey encryptionKey = client.newDataEncryptionKey();
  if (encryptionKey != null) {
    if (LOG.isDebugEnabled()) {
      LOG.debug(
        "SASL client doing encrypted handshake for addr = " + addr + ", datanodeId = " + dnInfo);
    }
    doSaslNegotiation(conf, channel, timeoutMs, getUserNameFromEncryptionKey(encryptionKey),
      encryptionKeyToPassword(encryptionKey.encryptionKey),
      createSaslPropertiesForEncryption(encryptionKey.encryptionAlgorithm), saslPromise,
      client);
  } else if (!UserGroupInformation.isSecurityEnabled()) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("SASL client skipping handshake in unsecured configuration for addr = " + addr +
        ", datanodeId = " + dnInfo);
    }
    saslPromise.trySuccess(null);
  } else if (dnInfo.getXferPort() < 1024) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("SASL client skipping handshake in secured configuration with " +
        "privileged port for addr = " + addr + ", datanodeId = " + dnInfo);
    }
    saslPromise.trySuccess(null);
  } else if (fallbackToSimpleAuth != null && fallbackToSimpleAuth.get()) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("SASL client skipping handshake in secured configuration with " +
        "unsecured cluster for addr = " + addr + ", datanodeId = " + dnInfo);
    }
    saslPromise.trySuccess(null);
  } else if (saslPropsResolver != null) {
    if (LOG.isDebugEnabled()) {
      LOG.debug(
        "SASL client doing general handshake for addr = " + addr + ", datanodeId = " + dnInfo);
    }
    doSaslNegotiation(conf, channel, timeoutMs, buildUsername(accessToken),
      buildClientPassword(accessToken), saslPropsResolver.getClientProperties(addr), saslPromise,
      client);
  } else {
    // It's a secured cluster using non-privileged ports, but no SASL. The only way this can
    // happen is if the DataNode has ignore.secure.ports.for.testing configured, so this is a
    // rare edge case.
    if (LOG.isDebugEnabled()) {
      LOG.debug("SASL client skipping handshake in secured configuration with no SASL " +
        "protection configured for addr = " + addr + ", datanodeId = " + dnInfo);
    }
    saslPromise.trySuccess(null);
  }
}
Example #10
Source File: SaslDataTransferClient.java From hadoop with Apache License 2.0
/**
 * Creates a new SaslDataTransferClient.
 *
 * @param conf the configuration
 * @param saslPropsResolver for determining properties of SASL negotiation
 * @param trustedChannelResolver for identifying trusted connections that do
 *   not require SASL negotiation
 * @param fallbackToSimpleAuth checked on each attempt at general SASL
 *   handshake, if true forces use of simple auth
 */
public SaslDataTransferClient(Configuration conf,
    SaslPropertiesResolver saslPropsResolver,
    TrustedChannelResolver trustedChannelResolver,
    AtomicBoolean fallbackToSimpleAuth) {
  this.conf = conf;
  this.fallbackToSimpleAuth = fallbackToSimpleAuth;
  this.saslPropsResolver = saslPropsResolver;
  this.trustedChannelResolver = trustedChannelResolver;
}
Example #11
Source File: SaslDataTransferClient.java From big-c with Apache License 2.0
/**
 * Creates a new SaslDataTransferClient.
 *
 * @param conf the configuration
 * @param saslPropsResolver for determining properties of SASL negotiation
 * @param trustedChannelResolver for identifying trusted connections that do
 *   not require SASL negotiation
 * @param fallbackToSimpleAuth checked on each attempt at general SASL
 *   handshake, if true forces use of simple auth
 */
public SaslDataTransferClient(Configuration conf,
    SaslPropertiesResolver saslPropsResolver,
    TrustedChannelResolver trustedChannelResolver,
    AtomicBoolean fallbackToSimpleAuth) {
  this.conf = conf;
  this.fallbackToSimpleAuth = fallbackToSimpleAuth;
  this.saslPropsResolver = saslPropsResolver;
  this.trustedChannelResolver = trustedChannelResolver;
}
Example #12
Source File: SaslDataTransferClient.java From hadoop with Apache License 2.0
/**
 * Creates a new SaslDataTransferClient.  This constructor is used in cases
 * where it is not relevant to track if a secure client did a fallback to
 * simple auth.  For intra-cluster connections between data nodes in the same
 * cluster, we can assume that all run under the same security configuration.
 *
 * @param conf the configuration
 * @param saslPropsResolver for determining properties of SASL negotiation
 * @param trustedChannelResolver for identifying trusted connections that do
 *   not require SASL negotiation
 */
public SaslDataTransferClient(Configuration conf,
    SaslPropertiesResolver saslPropsResolver,
    TrustedChannelResolver trustedChannelResolver) {
  this(conf, saslPropsResolver, trustedChannelResolver, null);
}
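A hedged sketch (not from either project) of wiring the pieces together: build the resolver with the utility from Example #2 and hand it to the constructor above. SaslDataTransferClient and TrustedChannelResolver are HDFS-internal classes, so treat this as an illustration of the constructor contract rather than a supported public API; TrustedChannelResolver.getInstance(conf) is assumed from the HDFS sources.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.protocol.datatransfer.TrustedChannelResolver;
import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil;
import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient;
import org.apache.hadoop.security.SaslPropertiesResolver;

public class SaslDataTransferClientSketch {
  public static SaslDataTransferClient create(Configuration conf) {
    // Null when dfs.data.transfer.protection is not configured; the client
    // then skips the general SASL handshake.
    SaslPropertiesResolver resolver =
        DataTransferSaslUtil.getSaslPropertiesResolver(conf);
    TrustedChannelResolver trusted = TrustedChannelResolver.getInstance(conf);
    // Intra-cluster style construction: no fallback-to-simple-auth tracking.
    return new SaslDataTransferClient(conf, resolver, trusted);
  }
}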
Example #13
Source File: DNConf.java From hadoop with Apache License 2.0
/**
 * Returns the SaslPropertiesResolver configured for use with
 * DataTransferProtocol, or null if not configured.
 *
 * @return SaslPropertiesResolver configured for use with DataTransferProtocol
 */
public SaslPropertiesResolver getSaslPropsResolver() {
  return saslPropsResolver;
}
Example #14
Source File: SaslDataTransferClient.java From big-c with Apache License 2.0
/**
 * Creates a new SaslDataTransferClient.  This constructor is used in cases
 * where it is not relevant to track if a secure client did a fallback to
 * simple auth.  For intra-cluster connections between data nodes in the same
 * cluster, we can assume that all run under the same security configuration.
 *
 * @param conf the configuration
 * @param saslPropsResolver for determining properties of SASL negotiation
 * @param trustedChannelResolver for identifying trusted connections that do
 *   not require SASL negotiation
 */
public SaslDataTransferClient(Configuration conf,
    SaslPropertiesResolver saslPropsResolver,
    TrustedChannelResolver trustedChannelResolver) {
  this(conf, saslPropsResolver, trustedChannelResolver, null);
}
Example #15
Source File: DNConf.java From big-c with Apache License 2.0
/**
 * Returns the SaslPropertiesResolver configured for use with
 * DataTransferProtocol, or null if not configured.
 *
 * @return SaslPropertiesResolver configured for use with DataTransferProtocol
 */
public SaslPropertiesResolver getSaslPropsResolver() {
  return saslPropsResolver;
}
Example #16
Source File: FanOutOneBlockAsyncDFSOutputSaslHelper.java From hbase with Apache License 2.0
SaslPropertiesResolver getSaslPropsResolver(SaslDataTransferClient saslClient);