Question

I had started the datanode successfully before, but when I tried today it showed the following output. It sounds as if I never created the /home/hadoop/appdata/hadoopdata directory, but I have confirmed that the directory already exists on my machine. So what is the problem? Why can't I start the datanode normally?

For example, I've tried deleting /home/hadoop/appdata/ and creating it again, but it still doesn't work.

I've also deleted /home/hadoop/tmp/hadoop_tmp and created it again; it still doesn't work.

2014-03-04 09:30:30,106 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2014-03-04 09:30:30,349 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot access storage directory /home/hadoop/appdata/hadoopdata
2014-03-04 09:30:30,350 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /home/hadoop/appdata/hadoopdata does not exist
2014-03-04 09:30:30,453 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
java.io.IOException: All specified directories are not accessible or do not exist.

    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:139)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:414)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)

Solution

  1. Stop all Hadoop services.

  2. Delete dfs/namenode.

  3. Delete dfs/datanode from both the slaves and the masters.

  4. Check the permissions of the Hadoop folder:

    sudo chmod -R 755 /usr/local/hadoop

  5. Restart Hadoop.

  6. Check/verify the permissions of the data folder:

    sudo chmod -R 755 /home/hadoop/appdata

  7. If you still have the problem, check the log files (a command sketch for these steps follows the list).
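
Putting these steps together, here is a rough command sequence. It assumes Hadoop is installed under /usr/local/hadoop (as in step 4) and that the storage directories are the ones mentioned in the question; check the paths in your own hdfs-site.xml first, because removing these directories destroys whatever HDFS blocks they hold.

    # 1. Stop all Hadoop services
    stop-all.sh                          # Hadoop 1.x; on 2.x use stop-dfs.sh and stop-yarn.sh

    # 2. Delete the namenode storage directory
    #    (the dfs.name.dir / dfs.namenode.name.dir path from hdfs-site.xml)
    rm -rf /path/to/dfs/namenode

    # 3. Delete the datanode storage directory on the masters and on every slave;
    #    in this question that is /home/hadoop/appdata/hadoopdata
    rm -rf /home/hadoop/appdata/hadoopdata

    # 4. Fix permissions on the Hadoop installation
    sudo chmod -R 755 /usr/local/hadoop

    # 5. Restart Hadoop
    start-all.sh                         # Hadoop 1.x; on 2.x use start-dfs.sh and start-yarn.sh

    # 6. Fix permissions on the data folder
    sudo chmod -R 755 /home/hadoop/appdata

    # 7. If the datanode still fails, inspect its log
    tail -n 100 /usr/local/hadoop/logs/hadoop-*-datanode-*.log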

OTHER TIPS

Try to format your namenode.**

Use:

    hadoop namenode -format

or:

    hdfs namenode -format

** This will give you a clearer picture of what is not configured as expected.
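
Before (or instead of) reformatting, it can help to confirm that the directory the datanode complains about is really the one configured in hdfs-site.xml and that the user running the datanode can reach it. A minimal sketch, assuming the install location and paths from this question (the relevant property is dfs.data.dir on Hadoop 1.x and dfs.datanode.data.dir on 2.x):

    # Show the configured datanode storage directory
    grep -A 1 'dfs.data' /usr/local/hadoop/conf/hdfs-site.xml         # Hadoop 1.x layout
    grep -A 1 'dfs.data' /usr/local/hadoop/etc/hadoop/hdfs-site.xml   # Hadoop 2.x layout

    # Verify the directory exists and is owned by the datanode user;
    # every parent directory must also be traversable by that user
    ls -ld /home/hadoop/appdata /home/hadoop/appdata/hadoopdata
    sudo chown -R hadoop:hadoop /home/hadoop/appdata

Also keep in mind that formatting the namenode reinitializes its metadata, so any data already stored in HDFS becomes unreachable; only run it on a cluster you are willing to wipe.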
