I have some code that looks like this:

for item in list:
    <bunch of slow python code, depending only on item>

I want to speed this up by parallelizing the loop. Normally the multiprocessing module would be perfect for this (see the answers to this question), but it was added in Python 2.6 and I'm stuck using 2.4.

What's the best way to parallelize a Python loop in Python 2.4?


Solution

You might be looking for fork, which makes it easy to hand each child process its specific item.

Your for loop will need to look a little different, though: you want to break out of it as soon as fork returns zero, i.e. in the child.

import os

L = ["a", "b", "c"]

for item in L:
    pid = os.fork()
    if pid == 0:
        break                     # child: stop forking and go do the work
    else:
        print "Forked:", pid      # parent: fork one child per item

if pid != 0:
    for _ in L:                   # parent: reap all children to avoid zombies
        os.wait()
    print "Main Execution, Ends"
else:
    print "Execution:", item      # child: item is the one it broke out on
    os._exit(0)                   # child exits without re-running parent code
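If you also need the results back in the parent, one pre-2.6 option is to give each child a pipe and have it pickle its result into it. The sketch below assumes a POSIX system; `slow_work` is a hypothetical stand-in for your slow per-item code, not anything from the original question:

```python
import os
import pickle

def slow_work(item):
    # hypothetical stand-in for the slow per-item computation
    return item.upper()

items = ["a", "b", "c"]
children = []

for item in items:
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:
        # child: compute, pickle the result into the pipe, exit immediately
        os.close(r)
        os.write(w, pickle.dumps(slow_work(item)))
        os.close(w)
        os._exit(0)
    # parent: keep the read end, close its copy of the write end
    os.close(w)
    children.append((pid, r))

results = []
for pid, r in children:
    f = os.fdopen(r, "rb")        # read end of this child's pipe
    results.append(pickle.load(f))
    f.close()
    os.waitpid(pid, 0)            # reap the child

print(results)
```

Because the parent reads the pipes in the order it forked the children, `results` comes back in the same order as `items`, even if the children finish out of order.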

Other tips

I'm not familiar with Python 2.4 specifically, but have you tried using subprocess.Popen and just spawning new processes? (subprocess was added in 2.4, so it is available to you.)

from subprocess import Popen

processes = []
for x in range(n):                # n = however many workers you want
    processes.append(Popen('python doWork.py', shell=True))

# block until every worker has finished
for process in processes:
    process.wait()
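To pass each item to its worker, you can put it on the command line instead of hard-coding a script. A minimal self-contained sketch, using `sys.executable` and `-c` in place of the hypothetical doWork.py so there is nothing external to create:

```python
import sys
from subprocess import Popen

items = ["a", "b", "c"]
processes = []

# spawn one worker per item; the -c one-liner stands in for doWork.py,
# which would read its item from sys.argv[1]
for item in items:
    processes.append(Popen([sys.executable, "-c",
                            "import sys; print(sys.argv[1].upper())", item]))

# wait for all workers and collect their exit codes
codes = [p.wait() for p in processes]
print(codes)
```

Passing a list of arguments (rather than a single string with shell=True) also sidesteps any shell-quoting problems with the items.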
Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow