Question

While setting up Hadoop in Pseudo-Distributed Operation, I ran into the following error saying that JAVA_HOME is not set, yet when I echo the variable it is set.

The variable is already set in conf/hadoop-env.sh (edited to export JAVA_HOME=/usr/lib/jvm/java-6-sun) and in bash.bashrc.

vardan@vardan-HP-G62-Notebook-PC:~/hadoop-0.20.203.0$ echo $JAVA_HOME
/usr/lib/jvm/java-6-sun
vardan@vardan-HP-G62-Notebook-PC:~/hadoop-0.20.203.0$ bin/start-all.sh
starting namenode, logging to /home/vardan/hadoop-0.20.203.0/bin/../logs/hadoop-vardan-namenode-vardan-HP-G62-Notebook-PC.out
localhost: starting datanode, logging to /home/vardan/hadoop-0.20.203.0/bin/../logs/hadoop-vardan-datanode-vardan-HP-G62-Notebook-PC.out
localhost: Error: JAVA_HOME is not set. 
localhost: starting secondarynamenode, logging to /home/vardan/hadoop-0.20.203.0/bin/../logs/hadoop-vardan-secondarynamenode-vardan-HP-G62-Notebook-PC.out
localhost: Error: JAVA_HOME is not set. 
starting jobtracker, logging to /home/vardan/hadoop-0.20.203.0/bin/../logs/hadoop-vardan-jobtracker-vardan-HP-G62-Notebook-PC.out 
localhost: starting tasktracker, logging to /home/vardan/hadoop-0.20.203.0/bin/../logs/hadoop-vardan-tasktracker-vardan-HP-G62-Notebook-PC.out
localhost: Error: JAVA_HOME is not set.

Solution

Check whether bin/start-all.sh (or the scripts it invokes) overrides JAVA_HOME.

Maybe put an echo $JAVA_HOME inside that script, right before those binaries are executed?
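
As an illustration (the exact contents of start-all.sh differ between Hadoop versions, so the placement here is hypothetical), a debug line of this shape near the top of bin/start-all.sh would show what the daemons actually inherit:

# hypothetical debug line for bin/start-all.sh
echo "JAVA_HOME as seen by start-all.sh: ${JAVA_HOME:-<unset>}"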

OTHER TIPS

I simply added this line to ./conf/hadoop-env.sh:

# The java implementation to use.  Required.
export JAVA_HOME=/usr/java/latest

and it helped.
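
Note that /usr/java/latest is typically a symlink maintained by the Sun/Oracle JDK RPM installers; a quick, illustrative check that it exists and points where you expect:

ls -l /usr/java/latest    # should point at an installed JDK, e.g. /usr/java/jdk1.6.0_32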

A JAVA_HOME set with a plain assignment (JAVA_HOME=...) is visible only in the current shell. Since bin/start-all.sh starts new shells, you need to export the environment variable so that child processes inherit it:

export JAVA_HOME=/usr/lib/jvm/java-6-sun
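
A quick way to see the difference, as a minimal sketch (run in a fresh shell where JAVA_HOME is not already exported; the bash -c child stands in for the shells start-all.sh spawns):

JAVA_HOME=/usr/lib/jvm/java-6-sun          # plain assignment: current shell only
bash -c 'echo ${JAVA_HOME:-unset}'         # child shell prints "unset"
export JAVA_HOME=/usr/lib/jvm/java-6-sun   # export: inherited by child processes
bash -c 'echo ${JAVA_HOME:-unset}'         # child shell prints the path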

Installing Java 1.6.x

  1. Download jdk-6u32-linux-i586.bin
  2. Run sh jdk-6u32-linux-i586.bin
  3. Back up the existing alternatives links:
     mv /etc/alternatives/java /etc/alternatives/java_bak
     mv /etc/alternatives/javac /etc/alternatives/javac_bak
  4. Create links to the new JDK:
     ln -s /opt/jdk1.6.0_32/bin/java /etc/alternatives/java
     ln -s /opt/jdk1.6.0_32/bin/javac /etc/alternatives/javac
  5. Run java -version

You must see this:

java version "1.6.0_32"
Java(TM) SE Runtime Environment (build 1.6.0_32-b05)
Java HotSpot(TM) Client VM (build 20.7-b02, mixed mode, sharing)
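
On Debian/Ubuntu systems, update-alternatives is the usual way to manage these links rather than moving them by hand; a minimal sketch, assuming the same /opt/jdk1.6.0_32 install path as above:

# Register the new JDK with the alternatives system (priority 1 is arbitrary)
update-alternatives --install /usr/bin/java java /opt/jdk1.6.0_32/bin/java 1
update-alternatives --install /usr/bin/javac javac /opt/jdk1.6.0_32/bin/javac 1
# Then select it interactively
update-alternatives --config java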

You can try adding export JAVA_HOME=/usr/lib/jvm/java-6-sun to your .bash_profile file. When you log in through a console, either physically at the machine or over ssh, .bash_profile is executed; .bashrc is read when you run /bin/bash or open another terminal.

You can also try writing it inside start-all.sh itself.

Make sure that JAVA_HOME points to the correct path; a sketch of the .bash_profile approach follows.
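
Assuming the path from the question (adjust to your JDK), a minimal sketch:

echo 'export JAVA_HOME=/usr/lib/jvm/java-6-sun' >> ~/.bash_profile
source ~/.bash_profile    # or log out and back in to pick it up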

Same problem here on Ubuntu Precise + CDH4...

Long story short: CDH4 uses Bigtop, and the simplest way to set JAVA_HOME is to edit the /etc/default/bigtop-utils file, for example:

export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_43/jre/

Note: I couldn't find any hadoop-env.sh file following a proper CDH4 install.
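
After editing /etc/default/bigtop-utils, restart the daemons so they pick up the new value; an illustrative example (the exact service names depend on which CDH4 components you installed):

sudo service hadoop-hdfs-namenode restart
sudo service hadoop-hdfs-datanode restart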

The problem is that the scripts that start the DFS and MapReduce daemons do not have JAVA_HOME defined in their environment.

Possibly, when HDFS and MapReduce are executed, their execution environment is the one specified by the script $HADOOP_HOME/conf/hadoop-env.sh. Consequently, it should be enough to define JAVA_HOME in $HADOOP_HOME/conf/hadoop-env.sh:

export JAVA_HOME=jdk-home-path

If that is not enough, the problem may be that the configuration directory being used is not the one you expect; Hadoop may be picking up a different hadoop-env.sh, or falling back to defaults if none is found. The quickest fix is to point Hadoop at the conf directory containing your hadoop-env.sh by setting the HADOOP_CONF_DIR environment variable:

export HADOOP_CONF_DIR=hadoop-home-path/conf
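
Putting both pieces together, a minimal sketch of the launching shell (the paths are placeholders, as above, and HADOOP_HOME is assumed to be set):

export JAVA_HOME=jdk-home-path              # your actual JDK path
export HADOOP_CONF_DIR="$HADOOP_HOME/conf"  # directory holding hadoop-env.sh
bin/start-all.sh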

You can add this to your .bashrc file:

export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")

and it will resolve dynamically when you update your Java packages, so Hadoop always picks up the current JDK.
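
To sanity-check what that command substitution resolves to on your machine (the output shown is illustrative, from a hypothetical Ubuntu OpenJDK install):

readlink -f /usr/bin/java
# e.g. /usr/lib/jvm/java-6-openjdk/jre/bin/java
readlink -f /usr/bin/java | sed "s:bin/java::"
# e.g. /usr/lib/jvm/java-6-openjdk/jre/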

Maybe you need to check the $JAVA_HOME configuration in the right file for your version, as below (see the check after this list):
1. For Hadoop 1.x: hadoop-home-path/conf/hadoop-env.sh
2. For Hadoop 2.x: hadoop-home-path/etc/hadoop/hadoop-env.sh
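
A quick way to confirm which value each file actually sets (paths are the placeholders from the list above):

grep JAVA_HOME hadoop-home-path/conf/hadoop-env.sh          # Hadoop 1.x
grep JAVA_HOME hadoop-home-path/etc/hadoop/hadoop-env.sh    # Hadoop 2.x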

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow