First off, it looks like you're using multi-processing, not threading. The two behave quite differently, and I suggest you read up on the distinction. Anyway, for the problem at hand, the cause is elsewhere.
The arp2 method is executed in parallel. I see two problems in that method:
print "%s \t %s \t %s" % (mac, ipaddr, line[8:])
This statement prints to standard output. In your code it might be executed by up to 12 processes simultaneously. Python gives you no guarantee that the print statement is atomic. It might very well happen that one process has written half of its line when the next process writes its own. In short, you get a mess in the output.
The same holds for
with open('C:\Python26\ARPips.prn', 'w+') as f:
    f.write("\n".join(map(lambda x: str(x), ips)) + "\n")
Again there is no guarantee that the processes won't step on each other's toes. The file content might get scrambled.
The easiest solution is not to do any file or console output inside the arp2 method. Instead, return the results: pool.map will collect them for you in a safe manner (it behaves like the regular map function). You can then write them to the file and the console from the main process.
If you want output while the scan is still running, you have to synchronize the processes (for example with a multiprocessing.Lock), so that only one process is ever writing or printing at the same time.
Also:

- Put an r in front of string literals with Windows-style paths: x = r'C:\Users\TomVB\Desktop\OID2.prn'. Backslashes are used for escaping in Python.
- Load the content of C:\Users\TomVB\Desktop\OID2.prn into a dict once. It will be much faster.
- map(lambda x: str(x), ips) is equal to map(str, ips).
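The three points above can be sketched together. The "prefix, tab, vendor" file format is an assumption for illustration only; a StringIO stands in for the real file:

```python
import io

# Raw string literal: backslashes are taken literally (no escaping),
# so Windows paths survive intact.
path = r'C:\Users\TomVB\Desktop\OID2.prn'

# Stand-in for the file contents (format assumed: "prefix<TAB>vendor").
data = io.StringIO("00-1A-2B\tAcme\n00-3C-4D\tExampleCorp\n")
oids = {}
for line in data:
    prefix, vendor = line.rstrip('\n').split('\t', 1)
    oids[prefix] = vendor

# A dict lookup is O(1); no need to rescan the file for every address.
vendor = oids.get('00-1A-2B')

# map(lambda x: str(x), ips) does the same work as map(str, ips):
ips = [3232235521, 3232235522]
as_strings = list(map(str, ips))
```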