Question

When I was running a Spark program on the cluster, I got this error in the log:

java.io.IOException: Cannot run program "java" (in directory "/cloud/packages/spark-0.9.0-incubating-bin-hadoop1/work/app-20140424114752-0000/0"): java.io.IOException: error=2, No such file or directory
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
    at org.apache.spark.deploy.worker.ExecutorRunner.fetchAndRunExecutor(ExecutorRunner.scala:129)
    at org.apache.spark.deploy.worker.ExecutorRunner$$anon$1.run(ExecutorRunner.scala:59)
Caused by: java.io.IOException: java.io.IOException: error=2, No such file or directory
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
    at java.lang.ProcessImpl.start(ProcessImpl.java:65)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
    ... 2 more

I have set JAVA_HOME (/cloud/packages/jdk1.6.0_38) and SPARK_HOME (/cloud/packages/spark-0.9.0-incubating-bin-hadoop1).

What is the cause of this exception, and how can I fix it?

Solution

I encountered the same issue on Ubuntu 12.04 and fixed it by adding JAVA_HOME to /etc/environment.
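
For reference, on the setup described in the question the entry would be the single line below (the JDK path comes from the question; adjust it to your own install). /etc/environment is read at login, so log out and back in, or restart the Spark workers, before retrying:

JAVA_HOME=/cloud/packages/jdk1.6.0_38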

OTHER TIPS

Check your java version.

java -version

If Java is installed properly, it displays the Java version.
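
With the JDK from the question on the PATH, the first line of output would look like the following; the runtime and VM build lines that follow it vary by platform:

java version "1.6.0_38"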

If not, install Java (Ubuntu):

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer

Update

Check the output of:

echo $JAVA_HOME

If it is not set, export JAVA_HOME in your .bashrc:

export JAVA_HOME=/cloud/packages/jdk1.6.0_38 
export PATH=$PATH:$JAVA_HOME/bin
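
Then reload the file in your current shell and verify that both the variable and the binary resolve:

source ~/.bashrc
echo $JAVA_HOME
java -version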

In a shell, you are accustomed to typing java and letting the shell consult the PATH to find the binary. Here the worker is invoking the command directly through the OS, and, as the error says, the OS cannot find java. Either invoke the command through a shell such as bash, or give the full path to the java binary.
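
A minimal shell sketch of that failure mode, using the JDK path from the question purely for illustration: with an empty PATH the bare command name cannot be resolved and you get the same error=2 (No such file or directory), while a full path needs no PATH lookup at all.

# Empty PATH: the bare name cannot be resolved -- same error=2 as in the worker log
env PATH= java -version

# Full path: no PATH lookup needed, so this works regardless of the environment
env PATH= /cloud/packages/jdk1.6.0_38/bin/java -version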

Why though? Why not run the Java code inside the worker itself?

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow