Question

Suppose I have a plain text file with the following data:

DataSetOne
content
content
content


DataSetTwo
content
content
content
content

...and so on...

What I want to do is count how many content lines are in each data set. For example, the result should be:

<DataSetOne, 3>, <DataSetTwo, 4>

I am a beginner with Hadoop, and I wonder if there is a way to map a chunk of data as a whole to one node; for example, send all of DataSetOne to node 1 and all of DataSetTwo to node 2.

Can anyone give me an idea of how to achieve this?


Solution

First of all, your data sets are split across multiple map tasks if they are stored in separate files or if they exceed the configured block size. So if you have one data set of 128 MB and the block size is 64 MB, Hadoop stores the file as two blocks and sets up two mappers, one per block.
This is like the word count example in the Hadoop tutorials. As David says, you'll have to map the data into key/value pairs and then reduce on them. I would implement that like this:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Nested inside the job's driver class. Assumes plain text input read line
// by line and, as in the sample data, header lines starting with "DataSet".
public static class GroupMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text currentGroup = new Text(); // current data set name

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString().trim();
        if (line.isEmpty()) return;           // skip blank separator lines
        if (line.startsWith("DataSet")) {     // a header starts a new group
            currentGroup.set(line);
        } else {
            context.write(currentGroup, ONE); // one count per content line
        }
    }
}

public static class GroupReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values,
            Context context) throws IOException, InterruptedException {
        int size = 0;
        for (IntWritable v : values) {
            size += v.get();                  // sum the partial counts
        }
        context.write(key, new IntWritable(size));
    }
}

As David said as well, you could use a combiner. Combiners are simply reducers that run between the map and reduce phases to save resources; they are set on the job configuration.
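For example, a minimal driver sketch assuming Hadoop 2.x and the GroupMapper/GroupReducer classes above (DataSetCount is a hypothetical driver class name, not anything fixed by the API):

// Wire up the job and register the reducer as the combiner as well;
// this works here because it just sums partial counts.
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "dataset count");  // Hadoop 2.x API
job.setJarByClass(DataSetCount.class);
job.setMapperClass(GroupMapper.class);
job.setCombinerClass(GroupReducer.class);          // reducer doubles as combiner
job.setReducerClass(GroupReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);

Reusing the reducer as the combiner is only safe because summing counts is associative and commutative; the partial sums from each map task combine into the same final total.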

OTHER TIPS

I think the simplest way is to implement the logic in the mapper: remember which data set is the current one and emit pairs like this:

(DataSetOne, content)
(DataSetOne, content)
(DataSetOne, content)

(DataSetTwo, content)
(DataSetTwo, content)

And then you count the groups in the reduce stage.
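A sketch of that mapper (assuming, as in the sample data, that header lines start with "DataSet"; the reducer then simply counts how many values arrive per key):

// Mapper state: the data set the current line belongs to.
private String currentDataSet = "";

@Override
protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
    String line = value.toString().trim();
    if (line.isEmpty()) return;               // blank separator line
    if (line.startsWith("DataSet")) {
        currentDataSet = line;                // remember the current data set
    } else {
        // emit e.g. (DataSetOne, content)
        context.write(new Text(currentDataSet), value);
    }
}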

If performance becomes an issue, I would suggest considering a combiner.

You can extend the FileInputFormat class and implement the RecordReader interface (or, if you're using the newer API, extend the RecordReader abstract class) to define how your data is split. Here is a link that gives an example of how to implement these classes using the older API; a sketch of the newer-API approach follows the link.

http://www.questionhub.com/StackOverflow/4235318
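For the newer API, a minimal sketch of the same idea (the class name WholeFileTextInputFormat is made up for illustration): keep each file in a single split, so every line of a file, and therefore a whole data set, is processed by one mapper.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

// Keeps each input file whole: one split per file, so all lines of a
// file end up in the same map task.
public class WholeFileTextInputFormat extends FileInputFormat<LongWritable, Text> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false; // never split the file across mappers
    }

    @Override
    public RecordReader<LongWritable, Text> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        return new LineRecordReader(); // still hand lines to the mapper one by one
    }
}

You would register it on the job with job.setInputFormatClass(WholeFileTextInputFormat.class).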

Licensed under: CC-BY-SA with attribution