Question

I hear a lot about map/reduce, especially in the context of Google's massively parallel compute system. What exactly is it?

Solution

From the abstract of Google's MapReduce research publication page:

MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key.

The advantage of MapReduce is that the processing can be performed in parallel on multiple processing nodes (multiple servers), so it is a system that can scale very well.

Since it's based on the functional programming model, the map and reduce steps have no side effects (the state and results of each subsection of a map process do not depend on any other), so the data set being mapped and reduced can be partitioned across multiple processing nodes.
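To make the model concrete, here is a minimal, single-machine sketch of the canonical word-count example in Java: the map step emits a (word, 1) pair for every word, a shuffle step groups the pairs by key, and the reduce step merges all values that share a key into a single count. The class and variable names are made up for illustration; a real MapReduce runtime would distribute each step across many nodes.

import java.util.*;

public class WordCountSketch
{
    public static void main(String[] args)
    {
        List<String> documents = Arrays.asList("the cat", "the dog");

        // map: every input record becomes a set of intermediate key/value pairs
        List<Map.Entry<String, Integer>> intermediate = new ArrayList<>();
        for (String doc : documents)
            for (String word : doc.split(" "))
                intermediate.add(Map.entry(word, 1));

        // shuffle: group the intermediate values by key
        Map<String, List<Integer>> grouped = new HashMap<>();
        for (Map.Entry<String, Integer> pair : intermediate)
            grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>()).add(pair.getValue());

        // reduce: merge all values associated with the same key into a single value
        Map<String, Integer> counts = new HashMap<>();
        for (Map.Entry<String, List<Integer>> entry : grouped.entrySet())
            counts.put(entry.getKey(), entry.getValue().stream().mapToInt(Integer::intValue).sum());

        System.out.println(counts); // e.g. {cat=1, dog=1, the=2} (HashMap order may vary)
    }
}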

Joel's Can Your Programming Language Do This? piece discusses how understanding functional programming was essential for Google in coming up with MapReduce, which powers its search engine. It's a very good read if you're unfamiliar with functional programming and how it enables scalable code.

See also: Wikipedia: MapReduce

Related question: Please explain mapreduce simply

OTHER TIPS

MapReduce Explained.

It explains this better than I can.

Map is a function that applies another function to all the items on a list, to produce another list with all the return values on it. (Another way of saying "apply f to x" is "call f, passing it x". So sometimes it sounds nicer to say "apply" instead of "call".)

This is how map is probably written in C# (it's called Select and is in the standard library):

public static IEnumerable<R> Select<T, R>(this IEnumerable<T> list, Func<T, R> func)
{
    foreach (T item in list)
        yield return func(item);
}

As you're a Java dude, and Joel Spolsky likes to tell GROSSLY UNFAIR LIES about how crappy Java is (actually, he's not lying, it is crappy, but I'm trying to win you over), here's my very rough attempt at a Java version (I have no Java compiler, and I vaguely remember Java version 1.1!):

// represents a function that takes one arg and returns a result
public interface IFunctor
{
    Object invoke(Object arg);
}

public static Object[] map(Object[] list, IFunctor func)
{
    Object[] returnValues = new Object[list.length];

    for (int n = 0; n < list.length; n++)
        returnValues[n] = func.invoke(list[n]);

    return returnValues;
}

I'm sure this can be improved in a million ways. But it's the basic idea.

Reduce is a function that turns all the items on a list into a single value. To do this, it needs to be given another function func that turns two items into a single value. It would work by giving the first two items to func. Then the result of that along with the third item. Then the result of that with the fourth item, and so on until all the items have gone and we're left with one value.

In C# reduce is called Aggregate and is again in the standard library. I'll skip straight to a Java version:

// represents a function that takes two args and returns a result
public interface IBinaryFunctor
{
    Object invoke(Object arg1, Object arg2);
}

public static Object reduce(Object[] list, IBinaryFunctor func)
{
    if (list.length == 0)
        return null; // or throw something?

    if (list.length == 1)
        return list[0]; // just return the only item

    Object returnValue = func.invoke(list[0], list[1]);

    // start at 2: items 0 and 1 have already been combined
    for (int n = 2; n < list.length; n++)
        returnValue = func.invoke(returnValue, list[n]);

    return returnValue;
}

These Java versions need generics added to them, but I don't know how to do that in Java. Still, you should be able to pass them anonymous inner classes to provide the functors:

String[] names = getLotsOfNames();

String commaSeparatedNames = (String) reduce(names,
    new IBinaryFunctor() {
        public Object invoke(Object arg1, Object arg2)
            { return ((String) arg1) + ", " + ((String) arg2); }
    });
Hopefully generics would get rid of the casts. The typesafe equivalent in C# is:

string commaSeparatedNames = names.Aggregate((a, b) => a + ", " + b);
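For what it's worth, here is a rough, untested sketch of what a generic Java version of reduce might look like (assuming Java 5+ generics, and reusing the names from above):

// generic version of the two-argument functor
public interface IBinaryFunctor<T>
{
    T invoke(T arg1, T arg2);
}

public static <T> T reduce(T[] list, IBinaryFunctor<T> func)
{
    if (list.length == 0)
        return null; // same open question as before

    T returnValue = list[0];

    // fold each remaining item into the running value
    for (int n = 1; n < list.length; n++)
        returnValue = func.invoke(returnValue, list[n]);

    return returnValue;
}

// no casts needed at the call site:
String commaSeparatedNames = reduce(names,
    new IBinaryFunctor<String>() {
        public String invoke(String arg1, String arg2) { return arg1 + ", " + arg2; }
    });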

Why is this "cool"? Simple ways of breaking up larger calculations into smaller pieces, so they can be put back together in different ways, are always cool. Google applies this idea to parallelization, because both map and reduce can be shared out over several computers.

But the key requirement is NOT that your language can treat functions as values. Any OO language can do that. The actual requirement for parallelization is that the little func functions you pass to map and reduce must not use or update any state. They must return a value that is dependent only on the argument(s) passed to them. Otherwise, the results will be completely screwed up when you try to run the whole thing in parallel.
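To make that concrete, here is a hypothetical functor (reusing the IFunctor interface above) that breaks the rule by keeping mutable state. Run on one machine it appears to work, but split across two machines each copy starts its own counter at zero, and the merged output is wrong:

// BAD: hidden mutable state means the result no longer depends only on the argument
public class NumberingFunctor implements IFunctor
{
    private int counter = 0;

    public Object invoke(Object arg)
    {
        counter++; // two parallel workers would each produce their own 1, 2, 3, ...
        return counter + ": " + arg;
    }
}

A pure version would have to derive everything it needs from the argument alone.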

After getting frustrated with blog posts that were either very long and waffly or very short and vague, I eventually discovered this very good, rigorous, concise paper.

Then I went ahead and made it more concise by translating it into Scala, providing the simplest case, where the user specifies just the map and reduce parts of the application. Strictly speaking, Hadoop/Spark employ a more complex programming model that requires the user to explicitly specify 4 more functions, outlined here: http://en.wikipedia.org/wiki/MapReduce#Dataflow

import scalaz.syntax.id._

trait MapReduceModel {
  type MultiSet[T] = Iterable[T]

  // `map` must be a pure function
  def mapPhase[K1, K2, V1, V2](map: ((K1, V1)) => MultiSet[(K2, V2)])
                              (data: MultiSet[(K1, V1)]): MultiSet[(K2, V2)] = 
    data.flatMap(map)

  def shufflePhase[K2, V2](mappedData: MultiSet[(K2, V2)]): Map[K2, MultiSet[V2]] =
    mappedData.groupBy(_._1).mapValues(_.map(_._2))

  // `reduce` must be a monoid
  def reducePhase[K2, V2, V3](reduce: ((K2, MultiSet[V2])) => MultiSet[(K2, V3)])
                             (shuffledData: Map[K2, MultiSet[V2]]): MultiSet[V3] =
    shuffledData.flatMap(reduce).map(_._2)

  def mapReduce[K1, K2, V1, V2, V3](data: MultiSet[(K1, V1)])
                                   (map: ((K1, V1)) => MultiSet[(K2, V2)])
                                   (reduce: ((K2, MultiSet[V2])) => MultiSet[(K2, V3)]): MultiSet[V3] =
    mapPhase(map)(data) |> shufflePhase |> reducePhase(reduce)
}

// Roughly how MapReduce works in Hadoop and Spark, except that on a cluster `.par` would give each
// element its own process/thread. Furthermore, the splitting here won't enforce any kind of balance,
// and is quite unnecessary anyway, as one would expect the data to already be split on HDFS - i.e.
// the filename would constitute K1.
// The shuffle phase will also be parallelized, and use the same partitioning as the map phase.
abstract class ParMapReduce(mapParNum: Int, reduceParNum: Int) extends MapReduceModel {
  def split[T](splitNum: Int)(data: MultiSet[T]): Set[MultiSet[T]]

  override def mapPhase[K1, K2, V1, V2](map: ((K1, V1)) => MultiSet[(K2, V2)])
                                       (data: MultiSet[(K1, V1)]): MultiSet[(K2, V2)] = {
    val groupedByKey = data.groupBy(_._1).map(_._2)
    groupedByKey.flatMap(split(mapParNum / groupedByKey.size + 1))
    .par.flatMap(_.map(map)).flatten.toList
  }

  override def reducePhase[K2, V2, V3](reduce: ((K2, MultiSet[V2])) => MultiSet[(K2, V3)])
                             (shuffledData: Map[K2, MultiSet[V2]]): MultiSet[V3] =
    shuffledData.map(g => split(reduceParNum / shuffledData.size + 1)(g._2).map((g._1, _)))
    .par.flatMap(_.map(reduce))
    .flatten.map(_._2).toList
}

Map is a native JS method that can be applied to an array. It creates a new array by applying some function to every element of the original array. So if you mapped function(element) { return element * 2; }, it would return a new array with every element doubled. The original array is left unmodified.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map

Reduce is a native JS method that can also be applied to an array. It applies a function across the array, carrying an initial output value called an accumulator. It loops through each element in the array, applies the function, and reduces the results to a single value (which begins as the accumulator). It is useful because you can produce any output you want; you just have to start with that type of accumulator. So if I wanted to reduce something into an object, I would start with an accumulator of {}.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reduce
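Neither operation is unique to JS. For comparison, here is a small sketch of the same two calls using Java's Stream API, where they are also named map and reduce (the class name is made up):

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class JsStyleMapReduce
{
    public static void main(String[] args)
    {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4);

        // map: build a new list by applying a function to every element
        List<Integer> doubled = numbers.stream()
            .map(x -> x * 2)
            .collect(Collectors.toList()); // [2, 4, 6, 8]

        // reduce: fold the elements into one value, starting from an accumulator (0 here)
        int sum = numbers.stream()
            .reduce(0, (acc, x) -> acc + x); // 10

        System.out.println(doubled + " " + sum);
    }
}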

MapReduce:

To run something large, we can use the computation power of the different computers in our office. The difficult part is splitting the task among the different computers; that is done by the MapReduce library.

The basic idea is that you divide the job into two parts: a Map, and a Reduce. Map basically takes the problem, splits it into sub-parts, and sends the sub-parts to different machines – so all the pieces run at the same time. Reduce takes the results from the sub-parts and combines them back together to get a single answer.

The input is a list of records. The result of the map computation is a list of key/value pairs. Reduce takes each set of values that has the same key and combines them into a single value. You can't tell whether the job was split into 100 pieces or 2 pieces; the end result looks pretty much like the result of a single map.

Let's look at simple map and reduce programs in Python:

The map() function in Python takes a function and a list as arguments, and returns a new list produced by applying the function to each item of the original list.

li = [5, 7, 4, 9]
final_list = list(map(lambda x: x * x, li))
print(final_list)  # [25, 49, 16, 81]

The reduce() function in Python takes a function and a list as arguments, and returns a single reduced result. It performs a repetitive operation over the pairs of items in the list.

# reduce to find the product of a list (use a + b for the sum)
from functools import reduce

x = (1, 2, 3, 4)
reduce(lambda a, b: a * b, x)  # 24