Question

I want to use all CPU cores in a Python script. I found some code like this:

from multiprocessing import Process, Queue

def do_sum():
    min = 0
    max = 100000000
    while min < max:
        min += 1
        file = open('mytext.txt','a')
        file.write(str(min))
def main():
    q = Queue()
    p1 = Process(target=do_sum)
    p2 = Process(target=do_sum)
    p1.start()
    p2.start()
    r1 = q.get()
    r2 = q.get()
    print r1+r2

if __name__=='__main__':
    main()

But the two processes don't cooperate: p1 starts writing 1, 2, 3, 4, 5 ... and p2 does not continue from there, it also starts from the beginning with 1, 2, 3, 4, so the result is 1122334455.

How can I make 2 CPU cores work together? I want to write the file as fast as my PC can. I have an i7 CPU; how can I use all of its cores?


Solution

You need a lock mechanism (http://en.wikipedia.org/wiki/Lock_%28computer_science%29) and shared references for (min, max), not local copies in each process. The multiprocessing library already provides a Lock() object to avoid overwriting and a Value() object to share state between several processes.

from multiprocessing import Process, Lock, Value

def do_sum(worker_id, counter, lock):
    # worker_id is unused here; it just identifies the process.
    MAX = 50
    while True:
        with lock:                            # only one process at a time past this point
            if counter.value >= MAX:          # check inside the lock to avoid a race
                break
            counter.value += 1

            with open('mytext.txt', 'a') as f:
                f.write(str(counter.value))
                f.write("\n")


def main():
    counter = Value('i', 0)                   # shared integer, visible to both processes
    lock = Lock()

    # open('mytext.txt', 'w').close()         # uncomment to truncate the file first

    p1 = Process(target=do_sum, args=(0, counter, lock))
    p2 = Process(target=do_sum, args=(1, counter, lock))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print(counter.value)

if __name__ == '__main__':
    main()
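
With both the MAX check and the write inside the lock, mytext.txt ends up containing the numbers 1 through 50 exactly once each and in order, no matter how the two processes interleave, and the script no longer needs the Queue from the question.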

Anyway, you can harness as much CPU power as you want: the performance bottleneck of this algorithm is in the I/O operations, which are inherently sequential.
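
If the goal is really to keep every core of the i7 busy on the computation itself, one common pattern (not from the original answer) is to split the CPU-bound part across a multiprocessing.Pool and leave the file write sequential in the parent process. Below is a minimal sketch along those lines; the chunking scheme and the count_chunk helper are illustrative assumptions, not code from the original post.

from multiprocessing import Pool, cpu_count

N = 100000000   # same upper bound as in the question

def count_chunk(bounds):
    # CPU-bound stand-in for real work: count the numbers in [start, end).
    start, end = bounds
    total = 0
    for _ in range(start, end):
        total += 1
    return total

def main():
    workers = cpu_count()                        # one worker per core
    step = N // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], N)              # last chunk absorbs the remainder

    with Pool(processes=workers) as pool:
        results = pool.map(count_chunk, chunks)  # the CPU work runs in parallel

    # The file write stays in the parent: it is I/O-bound and sequential anyway.
    with open('mytext.txt', 'a') as f:
        f.write(str(sum(results)) + "\n")

if __name__ == '__main__':
    main()

Each worker runs its chunk independently, so the computation scales with the number of cores, while the single append to mytext.txt remains a sequential I/O step, which is exactly the bottleneck mentioned above.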

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow