Question

I have an input file that looks like this, already uploaded to HDFS at /tmp/input. The fields are delimited by ^A, a nonprinting control character (this is how it appears in vi):

A^A10
A^A7
A^A10
A^A5
A^A10
A^A8
B^A1
A^A9
B^A1
A^A9
B^A1
A^A9
B^A1
A^A9
B^A1
A^A9
B^A1
A^A9

My mapper looks like this:

import sys
for line in sys.stdin:
    name, score = line.strip().split(chr(1))
    print '\t'.join([name, str(int(score)+1)])

My reducer looks like this (roughly):

import sys
from datetime import datetime

def calc(inputList):
    return min(inputList)

def main():
    current_key = None
    value_list = []
    key = None
    value = None
    result = None
    for line in sys.stdin:
        try:
            line = line.strip()
            key, value = line.split('\t', 1)

            try:
                value = eval(value)
            except:
                continue
            if current_key == key:
                value_list.append(value)
            else:
                if current_key:
                    try:
                        result = str(calc(value_list))
                    except:
                        pass
                    print '%s\t%s' % (current_key, result )
                value_list = [value]
                current_key = key
        except:
            pass
    print '%s\t%s' % (current_key, str(calc(value_list)))

if __name__ == '__main__':
    main()

I tested the mapper and reducer in the shell, and they work:

$ cat input | python mapper.py | sort -t$'\t' -k1 | python reducer.py 
A   6
B   2
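
As a further sanity check, the same map → sort → reduce logic can be simulated in plain Python, with no Hadoop involved (a throwaway sketch using the question's sample data, not part of the actual job scripts):

```python
# Simulate the streaming pipeline: mapper (split on ^A, add 1 to the
# score), shuffle (sort by key), reducer (min per key).
from itertools import groupby

raw = ["A\x0110", "A\x017", "A\x0110", "A\x015", "A\x0110", "A\x018",
       "B\x011", "A\x019", "B\x011", "A\x019", "B\x011", "A\x019",
       "B\x011", "A\x019", "B\x011", "A\x019", "B\x011", "A\x019"]

# mapper: split each line on chr(1) and add 1 to the numeric score
mapped = []
for line in raw:
    name, score = line.strip().split(chr(1))
    mapped.append((name, int(score) + 1))

# shuffle: Hadoop sorts mapper output by key before the reduce phase
mapped.sort(key=lambda kv: kv[0])

# reducer: take the minimum value per key, mirroring calc() above
result = {key: min(v for _, v in group)
          for key, group in groupby(mapped, key=lambda kv: kv[0])}
print(result)  # {'A': 6, 'B': 2}
```

This reproduces the A 6 / B 2 output from the shell test, which confirms the logic itself is fine and points the problem at how the job is launched.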

But I failed to get it running with Hadoop streaming:

/usr/bin/hadoop \
jar /opt/cloudera/parcels/CDH-4.3.0-1.cdh4.3.0.p0.22/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.0.0-mr1-cdh4.3.0.jar \
-file mapper.py \
-mapper mapper.py \
-file reducer.py \
-reducer reducer.py \
-input /tmp/input \
-output /tmp/output

The error output looks like this:

13/10/07 15:59:02 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/10/07 15:59:02 INFO mapred.FileInputFormat: Total input paths to process : 1
13/10/07 15:59:02 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-a59347/mapred/local]
13/10/07 15:59:02 INFO streaming.StreamJob: Running job: job_201309301959_0089
13/10/07 15:59:02 INFO streaming.StreamJob: To kill this job, run:
13/10/07 15:59:02 INFO streaming.StreamJob: UNDEF/bin/hadoop job  -Dmapred.job.tracker=url1:8021 -kill job_201309301959_0089
13/10/07 15:59:02 INFO streaming.StreamJob: Tracking URL: http://url1:50030/jobdetails.jsp?jobid=job_201309301959_0089
13/10/07 15:59:03 INFO streaming.StreamJob:  map 0%  reduce 0%
13/10/07 15:59:10 INFO streaming.StreamJob:  map 50%  reduce 0%
13/10/07 16:00:10 INFO streaming.StreamJob:  map 100%  reduce 0%
13/10/07 16:00:26 INFO streaming.StreamJob:  map 100%  reduce 1%
13/10/07 16:00:32 INFO streaming.StreamJob:  map 100%  reduce 2%
13/10/07 16:00:37 INFO streaming.StreamJob:  map 100%  reduce 100%
13/10/07 16:00:37 INFO streaming.StreamJob: To kill this job, run:
13/10/07 16:00:37 INFO streaming.StreamJob: UNDEF/bin/hadoop job  -Dmapred.job.tracker=url1:8021 -kill job_201309301959_0089
13/10/07 16:00:37 INFO streaming.StreamJob: Tracking URL: http://url1:50030/jobdetails.jsp?jobid=job_201309301959_0089
13/10/07 16:00:37 ERROR streaming.StreamJob: Job not successful. Error: NA
13/10/07 16:00:37 INFO streaming.StreamJob: killJob...
Streaming Command Failed!

Any idea where I went wrong?

Solution

The Hadoop framework does not know how to run your mapper and reducer. There are two possible fixes:

FIX 1: explicitly call python.

-mapper "python mapper.py" -reducer "python reducer.py"
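
Applied to the command from the question (same jar and paths as above), the full invocation would then look like:

```shell
/usr/bin/hadoop \
  jar /opt/cloudera/parcels/CDH-4.3.0-1.cdh4.3.0.p0.22/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.0.0-mr1-cdh4.3.0.jar \
  -file mapper.py \
  -mapper "python mapper.py" \
  -file reducer.py \
  -reducer "python reducer.py" \
  -input /tmp/input \
  -output /tmp/output
```

The quotes matter: without them, the streaming jar would treat "python" and "mapper.py" as separate arguments.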

FIX 2: tell Hadoop where to find the Python interpreter by adding a shebang as the first line of each *.py file. For example:

#!/usr/bin/env python

Note, however, that python isn't always in /usr/bin (see comment by copumpkin below).
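
With FIX 2, the script typically also needs its execute bit set, or it still cannot be launched directly. A quick way to check both conditions (shown on a throwaway copy so the commands run anywhere; substitute your real mapper.py):

```shell
# Create a throwaway script whose first line is the shebang
# (stand-in for the real mapper.py).
printf '#!/usr/bin/env python\nimport sys\nfor line in sys.stdin:\n    sys.stdout.write(line)\n' > tmp_mapper.py

# Mark it executable; without this, running ./tmp_mapper.py fails.
chmod +x tmp_mapper.py

# Verify: first line is the shebang, and the execute bit is set.
head -1 tmp_mapper.py
test -x tmp_mapper.py && echo executable
```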

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow