org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException

org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not replicated yet:/nnThroughputBenchmark/addblock/AddblockBenchDir0/AddblockBench0
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1350)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:400)
    at org.apache.hadoop.hdfs.NNThroughputBenchmark.addBlocks(NNThroughputBenchmark.java:1228)
    at org.apache.hadoop.hdfs.NNThroughputBenchmark.testAddBlcok(NNThroughputBenchmark.java:1216)
    at org.apache.hadoop.hdfs.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1247)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:141)
    at org.apache.hadoop.test.AllTestDriver.main(AllTestDriver.java:90)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:165)
    at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)

     NotReplicatedYetException is thrown while writing a file, at the point where a new block is allocated: getAdditionalBlock() calls checkFileProgress(pendingFile, false) to verify that the file's penultimate block has reached the system's minimum (safe) replication count; if it has not, NotReplicatedYetException is thrown.
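     A simplified sketch of the throw site, roughly following the older FSNamesystem.getAdditionalBlock() code (lease handling, quota and safe-mode checks are omitted, and names may differ between Hadoop versions):

// Inside FSNamesystem.getAdditionalBlock(src, clientName) -- simplified sketch
synchronized (this) {
    INodeFileUnderConstruction pendingFile = checkLease(src, clientName);

    // Before handing out a new block, make sure the penultimate block
    // already has at least minReplication known replicas.
    if (!checkFileProgress(pendingFile, false)) {
        throw new NotReplicatedYetException("Not replicated yet:" + src);
    }
    // ... read the replication factor, block size and client node from pendingFile ...
}
// ... choose target DataNodes for the new block and return a LocatedBlock ...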

     checkFileProgress(pendingFile, true) is also called from completeFileInternal() when the file is closed after writing; it checks every block of the file, and if any block has fewer replicas than the minimum requirement, completeFileInternal() returns CompleteFileStatus.STILL_WAITING.
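     And a minimal sketch of checkFileProgress() itself, again based on the older FSNamesystem code (blocksMap and minReplication are fields of that class; exact details vary by version):

// checkall == true : called from completeFileInternal(), checks every block.
// checkall == false: called from getAdditionalBlock(), checks only the
//                    penultimate block of the file.
synchronized boolean checkFileProgress(INodeFile v, boolean checkall) {
    if (checkall) {
        for (Block block : v.getBlocks()) {
            if (blocksMap.numNodes(block) < this.minReplication) {
                return false;   // completeFileInternal() returns STILL_WAITING
            }
        }
    } else {
        Block b = v.getPenultimateBlock();
        if (b != null && blocksMap.numNodes(b) < this.minReplication) {
            return false;       // caller throws NotReplicatedYetException
        }
    }
    return true;
}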

     The rest of this post analyzes the flow of allocating a new block during a file write:

     1. The NameNode creates a new INode for the file in its namespace;

     2. Before choosing DataNode locations for the new block and returning them to the client, the NameNode checks whether the file's penultimate block has reached the minimum replication; if not, it throws NotReplicatedYetException (the first block naturally skips this check, since there is no penultimate block yet);

     3. The client writes the data through the DataNode pipeline;

     4. On each DataNode in the pipeline, the PacketResponder thread waits for the ack from the downstream node; once a block has been fully received it calls finalizeBlock(), adds the block to the volumeMap via addBlock(), and later offerService() reports to the NameNode that the block has been stored;

     5. The NameNode processes the blockReceived() report;

     6. If another block is needed, the flow continues from step 2 (a toy model of this replica bookkeeping is sketched below).
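     Before looking at the test code, here is a small self-contained toy model of the NameNode-side bookkeeping behind steps 2, 4 and 5. The class and field names here are made up for illustration and are not the Hadoop API; only the replica counting mirrors the HDFS behavior described above.

import java.util.*;

// Toy model: addBlock() succeeds only if the penultimate block already has
// enough reported replicas; blockReceived() is what registers a replica.
public class ReplicaBookkeeping {
    private final Map<String, Set<String>> blocksMap = new HashMap<>(); // blockId -> reporting DataNodes
    private final List<String> fileBlocks = new ArrayList<>();          // blocks of one file, in order
    private final int minReplication = 1;
    private int nextBlockId = 0;

    // Step 2: allocate a new block after checking the penultimate block.
    String addBlock(String src) {
        if (fileBlocks.size() >= 2) {
            String penultimate = fileBlocks.get(fileBlocks.size() - 2);
            if (blocksMap.get(penultimate).size() < minReplication) {
                // stand-in for NotReplicatedYetException
                throw new IllegalStateException("Not replicated yet:" + src);
            }
        }
        String blockId = "blk_" + nextBlockId++;
        fileBlocks.add(blockId);
        blocksMap.put(blockId, new HashSet<>());
        return blockId;
    }

    // Steps 4-5: a DataNode reports that it has stored the block.
    void blockReceived(String datanode, String blockId) {
        blocksMap.get(blockId).add(datanode);
    }

    public static void main(String[] args) {
        ReplicaBookkeeping nn = new ReplicaBookkeeping();
        String b0 = nn.addBlock("/f");   // 1st block: no penultimate block, no check
        nn.blockReceived("dn1", b0);     // without this report, the 3rd addBlock() below fails
        String b1 = nn.addBlock("/f");   // 2nd block: still no penultimate block
        nn.blockReceived("dn1", b1);
        nn.addBlock("/f");               // 3rd block: penultimate is b0, 1 replica >= minReplication
    }
}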


     Accordingly, the code snippet used in the test looks like this:

// Simplified outline of the test flow (pseudo-code, not the exact Hadoop API):
NameNode.format(conf);
NameNode.createNameNode();

// Register the simulated DataNodes and send their first heartbeats.
for (int idx = 0; idx < dnNum; idx++) {
    datanodes[idx] = new Datanode();
    datanodes[idx].register();
    datanodes[idx].sendHeartbeat();
}

// Leave safe mode so that blocks can be allocated.
nameNode.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_LEAVE);

// Create the file and allocate a block (steps 1-2).
nameNode.create(fileName, FsPermission.getDefault(),
        clientName, true, repl, BLOCK_SIZE);
LocatedBlock loc = nameNode.addBlock(fileName, clientName);

// The chosen DataNode "stores" the block and reports it back (steps 3-5).
datanodes[jdx].addBlock(loc.getBlock());
nameNode.blockReceived(
        datanodes[jdx].dnRegistration,
        new Block[] {loc.getBlock()},
        new String[] {""});

nameNode.complete(fileName, clientName);
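     If the blockReceived() report is omitted, or the next addBlock() call is issued before the report arrives, the penultimate block still has fewer than minReplication known replicas, and the NameNode throws the NotReplicatedYetException shown at the top of this post.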