Well, in your code there's no obvious reason why it's taking that much memory. Still, it's a better idea to use context managers so the files are closed when the function ends. Since you read the files until one of them runs out, it's also a good idea to iterate over both at once. Also note that in Python 3 you should use the built-in zip() instead of izip(): itertools.izip no longer exists there, because zip() itself is already lazy.
from itertools import izip  # Python 2 only; in Python 3 use the built-in zip()

try:
    with open(myfile1, 'r') as file1, \
         open(myfile2, 'r') as file2, \
         open(result_file, 'w') as file3:
        for l1, l2 in izip(file1, file2):
            if l1 != l2:
                a_line = l1.strip().split(',')
                b_line = l2.strip().split(',')
                if a_line[0] != b_line[0] or a_line[1] != b_line[1]:
                    raise Exception("error {} could not match {}".format(a_line, b_line))
                error = abs(float(a_line[2]) - float(b_line[2])) / float(a_line[2])
                file3.write("{},{},{},{},{}\n".format(a_line[0],
                                                      a_line[1],
                                                      a_line[2],
                                                      b_line[3],
                                                      error * 100))
except Exception as err:
    print(err)
Given the code above, you should never have more than one line of each file loaded in memory at once. If you still get memory issues, either something else in your code is misbehaving, or your system isn't deallocating the memory fast enough.
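To see why only one line per file is in memory at a time, here is a minimal, self-contained sketch (Python 3, using io.StringIO with made-up contents in place of real files) showing that zip() over file objects yields line pairs lazily:

```python
import io

# Hypothetical file contents, standing in for myfile1 and myfile2.
# zip() over file objects is lazy: each iteration reads exactly one
# line from each file, so only the current pair is held in memory.
file1 = io.StringIO("a,1,10.0\na,2,20.0\n")
file2 = io.StringIO("a,1,10.5\na,2,19.0\n")

for l1, l2 in zip(file1, file2):
    a_line = l1.strip().split(',')
    b_line = l2.strip().split(',')
    # Same relative-error computation as in the answer above.
    error = abs(float(a_line[2]) - float(b_line[2])) / float(a_line[2])
    print("{},{},{:.1f}".format(a_line[0], a_line[1], error * 100))
```

In Python 2, plain zip() would instead build the full list of pairs up front, which is exactly why izip() was the right choice there.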