Java Code Examples for org.apache.hadoop.hdfs.protocol.Block#getBlockName()
The following examples show how to use org.apache.hadoop.hdfs.protocol.Block#getBlockName().
Each example links to the original project and source file.
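Before the examples, a minimal sketch of what getBlockName() returns. It prefixes the numeric block id with "blk_", producing the on-disk file name that the examples below pass to the File constructors. The block id, length, and generation stamp values here are hypothetical.

```java
import org.apache.hadoop.hdfs.protocol.Block;

public class GetBlockNameDemo {
    public static void main(String[] args) {
        // Hypothetical values: block id 123, length 0, generation stamp 1.
        Block b = new Block(123L, 0L, 1L);
        // getBlockName() returns "blk_" + blockId, the name used for the
        // block's file on the datanode's local disk.
        System.out.println(b.getBlockName()); // blk_123
    }
}
```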
Example 1
Source File: BlockPoolSlice.java From lucene-solr with Apache License 2.0
/**
 * Temporary files. They get moved to the finalized block directory when
 * the block is finalized.
 */
File createTmpFile(Block b) throws IOException {
  File f = new File(tmpDir, b.getBlockName());
  File tmpFile = DatanodeUtil.createFileWithExistsCheck(
      volume, b, f, fileIoProvider);
  // If an exception occurs during creation, the counter will not have been
  // incremented, so there is no need to decrement it.
  incrNumBlocks();
  return tmpFile;
}
Example 2
Source File: BlockPoolSlice.java From lucene-solr with Apache License 2.0
/**
 * RBW files. They get moved to the finalized block directory when
 * the block is finalized.
 */
File createRbwFile(Block b) throws IOException {
  File f = new File(rbwDir, b.getBlockName());
  File rbwFile = DatanodeUtil.createFileWithExistsCheck(
      volume, b, f, fileIoProvider);
  // If an exception occurs during creation, the counter will not have been
  // incremented, so there is no need to decrement it.
  incrNumBlocks();
  return rbwFile;
}
Example 3
Source File: FastCopy.java From RDFS with Apache License 2.0
/**
 * Updates the status of a block. If the block is in the
 * {@link FastCopy#blockStatusMap} we are still waiting for the block to
 * reach the desired replication level.
 *
 * @param b
 *          the block whose status needs to be updated
 * @param isError
 *          whether or not the block had an error
 */
private void updateBlockStatus(Block b, boolean isError) {
  synchronized (blockStatusMap) {
    BlockStatus bStatus = blockStatusMap.get(b);
    if (bStatus == null) {
      return;
    }
    if (isError) {
      bStatus.addBadReplica();
      if (bStatus.isBadBlock()) {
        blockStatusMap.remove(b);
        blkRpcException = new IOException(
            "All replicas are bad for block : " + b.getBlockName());
      }
    } else {
      bStatus.addGoodReplica();
      // Removing the block from the blockStatusMap indicates that it has
      // reached the desired replication level, so we now update the
      // fileStatusMap. Note that this happens only once per block.
      if (bStatus.isGoodBlock()) {
        blockStatusMap.remove(b);
        updateFileStatus();
      }
    }
  }
}
Example 4
Source File: BlockPoolSlice.java From hadoop with Apache License 2.0
/**
 * Temporary files. They get moved to the finalized block directory when
 * the block is finalized.
 */
File createTmpFile(Block b) throws IOException {
  File f = new File(tmpDir, b.getBlockName());
  return DatanodeUtil.createTmpFile(b, f);
}
Example 5
Source File: BlockPoolSlice.java From hadoop with Apache License 2.0
/**
 * RBW files. They get moved to the finalized block directory when
 * the block is finalized.
 */
File createRbwFile(Block b) throws IOException {
  File f = new File(rbwDir, b.getBlockName());
  return DatanodeUtil.createTmpFile(b, f);
}
Example 6
Source File: BlockPoolSlice.java From big-c with Apache License 2.0
/**
 * Temporary files. They get moved to the finalized block directory when
 * the block is finalized.
 */
File createTmpFile(Block b) throws IOException {
  File f = new File(tmpDir, b.getBlockName());
  return DatanodeUtil.createTmpFile(b, f);
}
Example 7
Source File: BlockPoolSlice.java From big-c with Apache License 2.0
/**
 * RBW files. They get moved to the finalized block directory when
 * the block is finalized.
 */
File createRbwFile(Block b) throws IOException {
  File f = new File(rbwDir, b.getBlockName());
  return DatanodeUtil.createTmpFile(b, f);
}
Example 8
Source File: FSDataset.java From RDFS with Apache License 2.0
/**
 * Temporary files. They get moved to the finalized block directory when
 * the block is finalized.
 */
File createTmpFile(Block b) throws IOException {
  File f = new File(tmpDir, b.getBlockName());
  return FSDataset.createTmpFile(b, f);
}
Example 9
Source File: FSDataset.java From RDFS with Apache License 2.0
File createDetachFile(Block b) throws IOException {
  File f = new File(detachDir, b.getBlockName());
  return FSDataset.createTmpFile(b, f);
}
Example 10
Source File: FSDataset.java From RDFS with Apache License 2.0
File getTmpFile(Block b) throws IOException {
  File f = new File(tmpDir, b.getBlockName());
  return f;
}
Example 11
Source File: FSDataset.java From RDFS with Apache License 2.0
/**
 * RBW files. They get moved to the finalized block directory when
 * the block is finalized.
 */
File createRbwFile(Block b) throws IOException {
  File f = new File(rbwDir, b.getBlockName());
  return FSDataset.createTmpFile(b, f);
}
Example 12
Source File: FSDataset.java From RDFS with Apache License 2.0
private File addBlock(int namespaceId, Block b, File src, boolean createOk,
    boolean resetIdx) throws IOException {
  if (numBlocks < maxBlocksPerDir) {
    File dest = new File(dir, b.getBlockName());
    File metaData = getMetaFile(src, b);
    File newmeta = getMetaFile(dest, b);
    if (!metaData.renameTo(newmeta) || !src.renameTo(dest)) {
      throw new IOException("could not move files for " + b +
          " from tmp to " + dest.getAbsolutePath());
    }
    if (DataNode.LOG.isDebugEnabled()) {
      DataNode.LOG.debug("addBlock: Moved " + metaData + " to " + newmeta);
      DataNode.LOG.debug("addBlock: Moved " + src + " to " + dest);
    }
    numBlocks += 1;
    return dest;
  }
  FSDir[] children = this.getChildren();
  if (lastChildIdx < 0 && resetIdx) {
    // reset so that all children will be checked
    lastChildIdx = random.nextInt(children.length);
  }
  if (lastChildIdx >= 0 && children != null) {
    // Check if any child-tree has room for a block.
    for (int i = 0; i < children.length; i++) {
      int idx = (lastChildIdx + i) % children.length;
      File file = children[idx].addBlock(namespaceId, b, src, false, resetIdx);
      if (file != null) {
        lastChildIdx = idx;
        return file;
      }
    }
    lastChildIdx = -1;
  }
  if (!createOk) {
    return null;
  }
  if (children == null || children.length == 0) {
    // make sure children is immutable once initialized.
    FSDir[] newChildren = new FSDir[maxBlocksPerDir];
    for (int idx = 0; idx < maxBlocksPerDir; idx++) {
      newChildren[idx] = new FSDir(namespaceId,
          new File(dir, DataStorage.BLOCK_SUBDIR_PREFIX + idx));
    }
    childrenDirs = children = newChildren;
  }
  // now pick a child randomly for creating a new set of subdirs.
  lastChildIdx = random.nextInt(children.length);
  return children[lastChildIdx].addBlock(namespaceId, b, src, true, false);
}
Example 13
Source File: FSDataset.java From RDFS with Apache License 2.0
/**
 * Finds a volume for the dstBlock and adds the new block to the FSDataset
 * data structures to indicate we are going to start writing to the block.
 *
 * @param srcFileSystem
 *          the file system for srcBlockFile
 * @param srcBlockFile
 *          the block file for the srcBlock
 * @param srcNamespaceId
 *          the namespace id for the source block
 * @param srcBlock
 *          the source block that needs to be copied over
 * @param dstNamespaceId
 *          the namespace id for the destination block
 * @param dstBlock
 *          the new destination block that needs to be created for copying
 * @return whether or not a hardlink is possible; if a hardlink was not
 *         requested this is always false.
 * @throws IOException
 */
private boolean copyBlockLocalAdd(String srcFileSystem, File srcBlockFile,
    int srcNamespaceId, Block srcBlock, int dstNamespaceId, Block dstBlock)
    throws IOException {
  boolean hardlink = true;
  File dstBlockFile = null;
  lock.writeLock().lock();
  try {
    if (isValidBlock(dstNamespaceId, dstBlock, false) ||
        volumeMap.getOngoingCreates(dstNamespaceId, dstBlock) != null) {
      throw new BlockAlreadyExistsException("Block " + dstBlock +
          " already exists");
    }
    if (srcBlockFile == null || !srcBlockFile.exists()) {
      throw new IOException("Block " + srcBlock.getBlockName() +
          " is not valid or does not have a valid block file");
    }
    FSVolume dstVol = null;
    if (shouldHardLinkBlockCopy) {
      dstVol = findVolumeForHardLink(
          srcFileSystem, srcNamespaceId, srcBlock, srcBlockFile);
    }
    // Could not find a volume for a hard link, fall back to regular file
    // copy.
    if (dstVol == null) {
      dstVol = volumes.getNextVolume(srcBlock.getNumBytes());
      hardlink = false;
    }
    dstBlockFile = addToOngoingCreates(dstNamespaceId, dstBlock, dstVol);
    volumeMap.add(dstNamespaceId, dstBlock,
        new DatanodeBlockInfo(dstVol, dstBlockFile,
            DatanodeBlockInfo.UNFINALIZED));
  } finally {
    lock.writeLock().unlock();
  }
  if (dstBlockFile == null) {
    throw new IOException("Could not allocate block file for : " +
        dstBlock.getBlockName());
  }
  return hardlink;
}
Example 14
Source File: FSDataset.java From hadoop-gpu with Apache License 2.0
private File addBlock(Block b, File src, boolean createOk, boolean resetIdx)
    throws IOException {
  if (numBlocks < maxBlocksPerDir) {
    File dest = new File(dir, b.getBlockName());
    File metaData = getMetaFile(src, b);
    File newmeta = getMetaFile(dest, b);
    if (!metaData.renameTo(newmeta) || !src.renameTo(dest)) {
      throw new IOException("could not move files for " + b +
          " from tmp to " + dest.getAbsolutePath());
    }
    if (DataNode.LOG.isDebugEnabled()) {
      DataNode.LOG.debug("addBlock: Moved " + metaData + " to " + newmeta);
      DataNode.LOG.debug("addBlock: Moved " + src + " to " + dest);
    }
    numBlocks += 1;
    return dest;
  }
  if (lastChildIdx < 0 && resetIdx) {
    // reset so that all children will be checked
    lastChildIdx = random.nextInt(children.length);
  }
  if (lastChildIdx >= 0 && children != null) {
    // Check if any child-tree has room for a block.
    for (int i = 0; i < children.length; i++) {
      int idx = (lastChildIdx + i) % children.length;
      File file = children[idx].addBlock(b, src, false, resetIdx);
      if (file != null) {
        lastChildIdx = idx;
        return file;
      }
    }
    lastChildIdx = -1;
  }
  if (!createOk) {
    return null;
  }
  if (children == null || children.length == 0) {
    children = new FSDir[maxBlocksPerDir];
    for (int idx = 0; idx < maxBlocksPerDir; idx++) {
      children[idx] = new FSDir(new File(dir,
          DataStorage.BLOCK_SUBDIR_PREFIX + idx));
    }
  }
  // now pick a child randomly for creating a new set of subdirs.
  lastChildIdx = random.nextInt(children.length);
  return children[lastChildIdx].addBlock(b, src, true, false);
}
Example 15
Source File: FSDataset.java From hadoop-gpu with Apache License 2.0
/**
 * Temporary files. They get moved to the real block directory either when
 * the block is finalized or the datanode restarts.
 */
File createTmpFile(Block b) throws IOException {
  File f = new File(tmpDir, b.getBlockName());
  return createTmpFile(b, f);
}
Example 16
Source File: FSDataset.java From hadoop-gpu with Apache License 2.0
/**
 * Returns the name of the temporary file for this block.
 */
File getTmpFile(Block b) throws IOException {
  File f = new File(tmpDir, b.getBlockName());
  return f;
}