Question

I'd like to know the difference between these two modes, from a configuration point of view as well as a theoretical point of view.

Do these two modes use different port numbers, or are there any other differences?


Solution

My 2 cents.

Single node setup (standalone setup)

By default, Hadoop is configured to run in a non-distributed or standalone mode, as a single Java process. There are no daemons running and everything runs in a single JVM instance. HDFS is not used.

You don't have to do anything as far as configuration is concerned, except setting JAVA_HOME. Just download the tarball, unzip it, and you are good to go.
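As a sketch of how little setup standalone mode needs (the release version and the JAVA_HOME path below are placeholders for illustration, not specific recommendations):

```shell
# Unpack the downloaded release tarball (version is an example).
tar -xzf hadoop-1.2.1.tar.gz
cd hadoop-1.2.1

# Point Hadoop at your JDK; this path is an assumption -- adjust for your system.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

# Run one of the bundled example jobs entirely inside a single JVM.
# No daemons are started, and input/output live on the local filesystem.
bin/hadoop jar hadoop-examples-*.jar wordcount input/ output/
```

Everything here runs as one ordinary Java process, which is why standalone mode is handy for debugging MapReduce logic before touching a real cluster.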

Pseudo-distributed mode

The Hadoop daemons run on a local machine, thus simulating a cluster on a small scale. Different Hadoop daemons run in different JVM instances, but on a single machine. HDFS is used instead of the local FS.

As far as a pseudo-distributed setup is concerned, you need to set at least the following two properties, along with JAVA_HOME:

  1. fs.default.name in core-site.xml.

  2. mapred.job.tracker in mapred-site.xml.
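A minimal pseudo-distributed configuration might look like the following; the hostname and the port numbers 9000/9001 are the conventional values from the Hadoop 1.x documentation, and this also answers the port question above: the modes differ in which daemons (and thus which ports) are in use at all, not in a single port swap.

In core-site.xml:

```xml
<!-- Point the default filesystem at a local HDFS instance -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

In mapred-site.xml:

```xml
<!-- Run the JobTracker on the local machine -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
```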

You could have multiple datanodes and tasktrackers, but that doesn't make much sense on a single machine.

HTH

OTHER TIPS

A single node setup is one where you have (presumably) one datanode and one tasktracker on a single machine.

A pseudo-distributed setup is one where you have multiple datanodes and (presumably) multiple tasktrackers on a single machine. In other words, you run multiple instances of the datanode service on one machine to emulate a multi-node cluster.

Licensed under: CC-BY-SA with attribution