Your multiprocessing code is over-engineered, and it doesn't actually do the work it's supposed to do. I rewrote it to be simpler, to produce the correct result, and to run faster than the plain loop:
import multiprocessing
import time


def add_list(l):
    # Sum the entries of a list, also counting how many there are.
    total = 0
    counter = 0
    for ent in l:
        total += ent
        counter += 1
    return (total, counter)


def split_list(l, n):
    # Split `l` into `n` roughly equal lists.
    # Borrowed from http://stackoverflow.com/a/2136090/2073595
    return [l[i::n] for i in xrange(n)]


if __name__ == '__main__':
    start_time = time.time()
    numberList = range(1000000)
    counter = 0
    total = 0
    for id in numberList:
        total += id
        counter += 1
    print(counter)
    print(total)
    print("Finished in Seconds: %s" % (time.time() - start_time))

    start_time = time.time()
    num_consumers = multiprocessing.cpu_count()
    # Split the list up so that each consumer can add up a subsection of the list.
    lists = split_list(numberList, num_consumers)
    p = multiprocessing.Pool(num_consumers)
    results = p.map(add_list, lists)
    total = 0
    counter = 0
    # Combine the results each worker returned.
    for t, c in results:
        total += t
        counter += c
    print(counter)
    print(total)
    print("Finished in Seconds: %s" % (time.time() - start_time))
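One subtlety worth noting: the `l[i::n]` slicing in `split_list` stripes elements across the sublists rather than cutting contiguous chunks, which is fine here because addition doesn't care about order. A quick sketch of what it produces (written with `range` so it also runs on Python 3):

```python
def split_list(l, n):
    # Stripe `l` across `n` sublists: sublist i gets elements i, i+n, i+2n, ...
    return [l[i::n] for i in range(n)]

chunks = split_list(list(range(10)), 3)
print(chunks)  # [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
print(sum(sum(c) for c in chunks))  # 45, same as sum(range(10))
```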
And here's the output:
Standard:
1000000
499999500000
Finished in Seconds: 0.272150039673
Multiprocessing:
1000000
499999500000
Finished in Seconds: 0.238755941391
As @aruisdante noted, you have a very light workload, so the benefits of multiprocessing aren't fully felt here. With heavier per-item processing, you'd see a much bigger difference.
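To make that concrete, here's a minimal sketch with a heavier, CPU-bound task; `slow_square` is a made-up function that just burns cycles, not anything from your code, and is written with `range` so it also runs on Python 3:

```python
import multiprocessing
import time


def slow_square(n):
    # Hypothetical CPU-bound work (not from the original code): enough
    # arithmetic per item that process overhead stops dominating.
    total = 0
    for i in range(20000):
        total += (n * n + i) % 97
    return total


if __name__ == '__main__':
    nums = range(500)

    start_time = time.time()
    serial = [slow_square(n) for n in nums]
    print("Serial Seconds: %s" % (time.time() - start_time))

    start_time = time.time()
    p = multiprocessing.Pool(multiprocessing.cpu_count())
    parallel = p.map(slow_square, nums)
    p.close()
    p.join()
    print("Pool Seconds:   %s" % (time.time() - start_time))

    print(serial == parallel)  # True: Pool.map preserves input order
```

On a multi-core machine the pool version should pull clearly ahead here, because each item now costs far more than the overhead of shipping it to a worker.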