Java Code Examples for org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil#getLogicalHostname()
The following examples show how to use
org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil#getLogicalHostname().
Each example notes the project and source file it was taken from.
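Before the examples, here is a minimal, self-contained sketch of how getLogicalHostname() is typically used: it derives the logical nameservice name of a MiniDFSCluster started with an HA topology, and client failover configuration is then keyed on that name. This sketch is illustrative and not taken from any of the projects below; the class name and builder settings (e.g. numDataNodes(0)) are assumptions chosen for brevity.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.MiniDFSNNTopology;
import org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil;

// Illustrative sketch only -- not from the examples below.
public class LogicalHostnameSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Start a two-NameNode mini cluster in a simple HA topology
    // (numDataNodes(0) is an assumption to keep the sketch small).
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .nnTopology(MiniDFSNNTopology.simpleHATopology())
        .numDataNodes(0)
        .build();
    try {
      cluster.waitActive();
      cluster.transitionToActive(0);

      // Derive the cluster's logical nameservice name and wire up
      // client failover configuration keyed on that name.
      String logicalName = HATestUtil.getLogicalHostname(cluster);
      HATestUtil.setFailoverConfigurations(cluster, conf, logicalName);

      // Clients can now address the HA pair through the logical URI,
      // with no physical host or port in the path.
      Path root = new Path("hdfs://" + logicalName + "/");
      FileSystem fs = root.getFileSystem(conf);
      System.out.println("exists(/) = " + fs.exists(root));
      fs.close();
    } finally {
      cluster.shutdown();
    }
  }
}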
Example 1
Source File: TestDFSClientFailover.java From hadoop with Apache License 2.0
/**
 * Make sure that client failover works when an active NN dies and the standby
 * takes over.
 */
@Test
public void testDfsClientFailover() throws IOException, URISyntaxException {
  FileSystem fs = HATestUtil.configureFailoverFs(cluster, conf);

  DFSTestUtil.createFile(fs, TEST_FILE, FILE_LENGTH_TO_VERIFY, (short)1, 1L);

  assertEquals(fs.getFileStatus(TEST_FILE).getLen(), FILE_LENGTH_TO_VERIFY);
  cluster.shutdownNameNode(0);
  cluster.transitionToActive(1);
  assertEquals(fs.getFileStatus(TEST_FILE).getLen(), FILE_LENGTH_TO_VERIFY);

  // Check that it functions even if the URL becomes canonicalized
  // to include a port number.
  Path withPort = new Path("hdfs://" +
      HATestUtil.getLogicalHostname(cluster) + ":" +
      NameNode.DEFAULT_PORT + "/" + TEST_FILE.toUri().getPath());
  FileSystem fs2 = withPort.getFileSystem(fs.getConf());
  assertTrue(fs2.exists(withPort));

  fs.close();
}
Example 2
Source File: TestDFSClientFailover.java From hadoop with Apache License 2.0
/**
 * Test to verify legacy proxy providers are correctly wrapped.
 */
@Test
public void testWrappedFailoverProxyProvider() throws Exception {
  // setup the config with the dummy provider class
  Configuration config = new HdfsConfiguration(conf);
  String logicalName = HATestUtil.getLogicalHostname(cluster);
  HATestUtil.setFailoverConfigurations(cluster, config, logicalName);
  config.set(DFS_CLIENT_FAILOVER_PROXY_PROVIDER_KEY_PREFIX + "." + logicalName,
      DummyLegacyFailoverProxyProvider.class.getName());
  Path p = new Path("hdfs://" + logicalName + "/");

  // not to use IP address for token service
  SecurityUtil.setTokenServiceUseIp(false);

  // Logical URI should be used.
  assertTrue("Legacy proxy providers should use logical URI.",
      HAUtil.useLogicalUri(config, p.toUri()));
}
Example 3
Source File: TestDFSClientFailover.java From big-c with Apache License 2.0
/**
 * Make sure that client failover works when an active NN dies and the standby
 * takes over.
 */
@Test
public void testDfsClientFailover() throws IOException, URISyntaxException {
  FileSystem fs = HATestUtil.configureFailoverFs(cluster, conf);

  DFSTestUtil.createFile(fs, TEST_FILE, FILE_LENGTH_TO_VERIFY, (short)1, 1L);

  assertEquals(fs.getFileStatus(TEST_FILE).getLen(), FILE_LENGTH_TO_VERIFY);
  cluster.shutdownNameNode(0);
  cluster.transitionToActive(1);
  assertEquals(fs.getFileStatus(TEST_FILE).getLen(), FILE_LENGTH_TO_VERIFY);

  // Check that it functions even if the URL becomes canonicalized
  // to include a port number.
  Path withPort = new Path("hdfs://" +
      HATestUtil.getLogicalHostname(cluster) + ":" +
      NameNode.DEFAULT_PORT + "/" + TEST_FILE.toUri().getPath());
  FileSystem fs2 = withPort.getFileSystem(fs.getConf());
  assertTrue(fs2.exists(withPort));

  fs.close();
}
Example 4
Source File: TestDFSClientFailover.java From big-c with Apache License 2.0
/**
 * Test to verify legacy proxy providers are correctly wrapped.
 */
@Test
public void testWrappedFailoverProxyProvider() throws Exception {
  // setup the config with the dummy provider class
  Configuration config = new HdfsConfiguration(conf);
  String logicalName = HATestUtil.getLogicalHostname(cluster);
  HATestUtil.setFailoverConfigurations(cluster, config, logicalName);
  config.set(DFS_CLIENT_FAILOVER_PROXY_PROVIDER_KEY_PREFIX + "." + logicalName,
      DummyLegacyFailoverProxyProvider.class.getName());
  Path p = new Path("hdfs://" + logicalName + "/");

  // not to use IP address for token service
  SecurityUtil.setTokenServiceUseIp(false);

  // Logical URI should be used.
  assertTrue("Legacy proxy providers should use logical URI.",
      HAUtil.useLogicalUri(config, p.toUri()));
}
Example 5
Source File: TestDFSClientFailover.java From hadoop with Apache License 2.0
/**
 * Regression test for HDFS-2683.
 */
@Test
public void testLogicalUriShouldNotHavePorts() {
  Configuration config = new HdfsConfiguration(conf);
  String logicalName = HATestUtil.getLogicalHostname(cluster);
  HATestUtil.setFailoverConfigurations(cluster, config, logicalName);
  Path p = new Path("hdfs://" + logicalName + ":12345/");
  try {
    p.getFileSystem(config).exists(p);
    fail("Did not fail with fake FS");
  } catch (IOException ioe) {
    GenericTestUtils.assertExceptionContains(
        "does not use port information", ioe);
  }
}
Example 6
Source File: TestDFSClientFailover.java From big-c with Apache License 2.0
/**
 * Regression test for HDFS-2683.
 */
@Test
public void testLogicalUriShouldNotHavePorts() {
  Configuration config = new HdfsConfiguration(conf);
  String logicalName = HATestUtil.getLogicalHostname(cluster);
  HATestUtil.setFailoverConfigurations(cluster, config, logicalName);
  Path p = new Path("hdfs://" + logicalName + ":12345/");
  try {
    p.getFileSystem(config).exists(p);
    fail("Did not fail with fake FS");
  } catch (IOException ioe) {
    GenericTestUtils.assertExceptionContains(
        "does not use port information", ioe);
  }
}