Question

I have written a Python program that needs to run for multiple days at a time because it collects data continuously. Previously I had no issues running this program for months at a time. I recently made some updates to the program, and now after around 12 hours the process is killed by the dreaded out-of-memory (OOM) killer. The 'dmesg' output is the following:

[9084334.914808] Out of memory: Kill process 2276 (python2.7) score 698 or sacrifice child
[9084334.914811] Killed process 2276 (python2.7) total-vm:13279000kB, anon-rss:4838164kB, file-rss:8kB

Besides general Python changes, the main update to the program was the addition of a multiprocessing Queue. This is the first time I have ever used this feature, so I am not sure whether it might be the cause of the issue. The purpose of the Queue in my program is to be able to make dynamic changes in a parallel process. The Queue is created in the main program and is continually monitored in the parallel process. A simplified version of how I am doing this in the parallel process is the following (with 'q' being the Queue):

while True:
    if q.empty():
        pass              # nothing queued yet; fall through to the sleep
    else:
        fr = q.get()
        # Additional code
    time.sleep(1)

The dynamic changes to 'q' do not happen very often, so the majority of the time q.empty() will be true, but the loop is there to react as soon as changes are made. My question is: would running this code for multiple hours at a time cause the memory to eventually run low? With the 'while' loop being pretty short and running basically non-stop, I was thinking this might be a problem. If this could be the cause of the problem, does anybody have any suggestions on how to improve the code so the out-of-memory killer doesn't get called?

Thank you very much.


Solution

The only way you can run out of memory in the way you describe is if you're using more and more memory as time goes on. The loop shown here does not do that, so it cannot be (solely) responsible for any memory errors. Running a tight, infinite loop like this burns a lot of needless processor cycles, but it cannot exhaust memory by itself unless it is also storing data somewhere.
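As an aside, the check-and-sleep pattern can be replaced with a blocking get, which removes the wasted cycles without changing what the loop does. This is only a minimal sketch, assuming 'q' is a multiprocessing.Queue and reusing your one-second interval as the timeout (the worker function name is purely illustrative):

    try:
        from queue import Empty      # Python 3
    except ImportError:
        from Queue import Empty      # Python 2

    def worker(q):
        while True:
            try:
                # Block for up to one second instead of polling q.empty()
                fr = q.get(timeout=1)
            except Empty:
                continue             # nothing arrived yet; wait again
            # Additional code that reacts to 'fr' goes here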

It's likely that elsewhere in your code, you're holding onto some variables that you don't intend to. This is called a memory leak, and you can use a memory profiler to look for where such a leak is coming from.
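Before reaching for a full profiler, it can help to confirm that the process's resident memory really is growing over time. Here is a minimal sketch using the standard-library resource module (available on both Python 2 and 3; note that ru_maxrss is reported in kilobytes on Linux but in bytes on macOS) — the log_rss helper is purely illustrative:

    import resource

    def log_rss(tag):
        # Peak resident set size of this process; kilobytes on Linux
        peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        print("%s peak RSS: %d kB" % (tag, peak))

    # Call this periodically from the long-running parts of the program,
    # e.g. once per collection cycle, and watch whether the number climbs.
    log_rss("main loop")

If the number keeps climbing, a snapshotting profiler such as tracemalloc (Python 3.4+) or, on Python 2.7 as shown in your traceback, third-party tools such as objgraph or memory_profiler can point at the specific allocations that are accumulating.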

Some likely suspects are caches added to improve performance, or lists of objects that never go out of scope. Perhaps your multiprocessing queue is holding on to references to earlier data objects, or items are never removed from the queue once they're inserted? (The latter is unlikely given the code you've shown, since get() removes each item from a multiprocessing.Queue, but anything is possible.)
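If you want to rule the queue out directly, one quick check is whether its backlog keeps growing, which would mean items are being put on faster than the once-per-second loop takes them off. A rough sketch, again assuming 'q' is a multiprocessing.Queue (qsize() is approximate and raises NotImplementedError on some platforms, such as macOS):

    import time

    def monitor_backlog(q, interval=60):
        # Log the approximate queue length; a steadily rising number
        # means the producer is outpacing the consumer.
        while True:
            try:
                print("queue backlog: ~%d items" % q.qsize())
            except NotImplementedError:
                print("qsize() is not supported on this platform")
                return
            time.sleep(interval)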

Licensed under: CC-BY-SA with attribution