Question

I've been trying to debug a memory crash in my Python C extension, so I ran the script under Valgrind. I found there is too much "noise" in the Valgrind output, even when I run a command as simple as:

valgrind python -c ""

The Valgrind output is full of repeated blocks like this:

==12317== Invalid read of size 4
==12317==    at 0x409CF59: PyObject_Free (in /usr/lib/libpython2.5.so.1.0)
==12317==    by 0x405C7C7: PyGrammar_RemoveAccelerators (in /usr/lib/libpython2.5.so.1.0)
==12317==    by 0x410A1EC: Py_Finalize (in /usr/lib/libpython2.5.so.1.0)
==12317==    by 0x4114FD1: Py_Main (in /usr/lib/libpython2.5.so.1.0)
==12317==    by 0x8048591: main (in /usr/bin/python2.5)
==12317==  Address 0x43CD010 is 7,016 bytes inside a block of size 8,208 free'd
==12317==    at 0x4022F6C: free (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==12317==    by 0x4107ACC: PyArena_Free (in /usr/lib/libpython2.5.so.1.0)
==12317==    by 0x41095D7: PyRun_StringFlags (in /usr/lib/libpython2.5.so.1.0)
==12317==    by 0x40DF262: (within /usr/lib/libpython2.5.so.1.0)
==12317==    by 0x4099569: PyCFunction_Call (in /usr/lib/libpython2.5.so.1.0)
==12317==    by 0x40E76CC: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317==    by 0x40E70F3: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317==    by 0x40E896A: PyEval_EvalCodeEx (in /usr/lib/libpython2.5.so.1.0)
==12317==    by 0x40E8AC2: PyEval_EvalCode (in /usr/lib/libpython2.5.so.1.0)
==12317==    by 0x40FD99C: PyImport_ExecCodeModuleEx (in /usr/lib/libpython2.5.so.1.0)
==12317==    by 0x40FFC93: (within /usr/lib/libpython2.5.so.1.0)
==12317==    by 0x41002B0: (within /usr/lib/libpython2.5.so.1.0)

Python 2.5.2 on Slackware 12.2.

Is this normal behavior? If so, is Valgrind perhaps an inappropriate tool for debugging memory errors in Python?


Solution

You could try using the suppression file that comes with the Python source.
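
For example, a minimal sketch assuming your CPython source tree is unpacked at /path/to/Python-2.5.2 (adjust the path to your checkout); per Misc/README.valgrind you may first need to uncomment the PyObject_Free/PyObject_Realloc suppressions in that file:

valgrind --suppressions=/path/to/Python-2.5.2/Misc/valgrind-python.supp \
    python -c ""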

Reading the Python Valgrind README is a good idea too!

OTHER TIPS

This is quite common in any largish system. You can use Valgrind's suppression system to explicitly suppress warnings that you're not interested in.
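
As an illustration, a hand-written suppression entry matching the PyObject_Free read shown in the question could look like this (the entry name is arbitrary, and you may want additional fun: frames to narrow the match):

{
   pyobject-free-noise
   Memcheck:Addr4
   fun:PyObject_Free
}

Put entries like this in a file and pass it to Valgrind with --suppressions=that-file.supp.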

The most correct option is to tell Valgrind that it should intercept Python's allocation functions. You should patch valgrind/coregrind/m_replacemalloc/vg_replace_malloc.c, adding new interceptors for PyObject_Malloc, PyObject_Free and PyObject_Realloc, e.g.:

ALLOC_or_NULL(NONE,                  PyObject_Malloc,      malloc);

(note that the soname for user allocation functions should be NONE)
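
A slightly fuller sketch of the patch, assuming the FREE and REALLOC macros in vg_replace_malloc.c follow the same pattern as the file's existing libc interceptors (verify the exact macro names and argument lists in your Valgrind version before applying):

/* Sketch only: intercept pymalloc's entry points alongside the
   existing malloc/free/realloc interceptors for libc. */
ALLOC_or_NULL(NONE, PyObject_Malloc,  malloc);
FREE(NONE,          PyObject_Free,    free);
REALLOC(NONE,       PyObject_Realloc);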

Following the links given by Nick, I was able to find some updates in README.valgrind. In short, for Python 3.6 and later you can set the PYTHONMALLOC=malloc environment variable to effectively disable the warnings. For example, on my machine:

export PYTHONMALLOC=malloc
valgrind python my_script.py

produces no Python-related errors.
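
If you still want Python's debug hooks on top of raw malloc, Python 3.6+ also accepts the value malloc_debug (check the documentation for your exact version):

export PYTHONMALLOC=malloc_debug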

Yes, this is typical. Large systems often leave memory unfreed, which is fine as long as the amount is constant and not proportional to the running history of the system. The Python interpreter falls into this category.

Perhaps you can filter the Valgrind output to focus only on allocations made in your C extension?
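
A crude sketch of such a filter, assuming your extension is built as mymodule.so (a hypothetical name that would appear in the error stack frames):

valgrind python my_script.py 2>&1 | grep -B 4 -A 12 mymodule.so

A suppression file that hides everything except your module is the cleaner route, but the grep is quick to try.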

There is another option I found. James Henstridge has a custom build of Python that detects when it is running under Valgrind; in that case the pymalloc allocator is disabled, with PyObject_Malloc/PyObject_Free passing straight through to the normal malloc/free, which Valgrind knows how to track.

Package available here: https://launchpad.net/~jamesh/+archive/python
