Question

I would like to use statprof.py for profiling code in PyPy. Unfortunately, it does not seem to work: the line numbers it points to are off. Does anyone know how to make it work, or know of an alternative?


Solution

It's likely that the line numbers are off because PyPy, in JITted code, inlines many functions and delivers signals (here, from the timer) only at the ends of loops. Compare this with CPython, which delivers signals between two arbitrary bytecodes -- occasionally at the end of a loop too, but generally anywhere. So what you get on PyPy is what you would get on CPython if you constrained the signal handlers to run only at the "end of loop" bytecode.
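To see why the delivery point matters, here is a minimal sketch of how a signal-based statistical profiler in the style of statprof attributes samples. It is Unix-only (it relies on setitimer), and busy() is a hypothetical stand-in for your own workload: every time the timer fires, the handler records the line of whatever frame the signal interrupted.

    import signal
    from collections import Counter

    samples = Counter()

    def _sample(signum, frame):
        # Record (filename, line) of whatever frame the signal interrupted.
        samples[(frame.f_code.co_filename, frame.f_lineno)] += 1

    signal.signal(signal.SIGPROF, _sample)
    signal.setitimer(signal.ITIMER_PROF, 0.005, 0.005)  # sample every 5 ms of CPU time

    def busy():  # hypothetical workload, stands in for real code
        total = 0
        for i in range(10 ** 7):
            total += i * i
        return total

    busy()
    signal.setitimer(signal.ITIMER_PROF, 0, 0)  # stop the timer

    for (filename, lineno), count in samples.most_common(5):
        print("%s:%d  %d samples" % (filename, lineno, count))

The profiler can only ever see the lines at which the VM agrees to run the handler; if PyPy delivers only at loop ends, that is where all the samples pile up.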

This is why this kind of profiling will always seem to miss a lot of functions -- in particular, most functions with no loop in them.

You can try the built-in cProfile module instead. It comes, of course, with a bigger performance hit than statistical profiling, but try it anyway --- it doesn't prevent JITting, for example, so the overhead should still be reasonable.
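If you want to try it quickly, a minimal cProfile session looks like this -- deterministic profiling records every call, so no function is missed. Again, busy() is just a hypothetical workload:

    import cProfile
    import pstats

    def busy():  # hypothetical workload, stands in for real code
        total = 0
        for i in range(10 ** 6):
            total += i * i
        return total

    cProfile.run("busy()", "busy.prof")  # record every call, write stats to a file
    pstats.Stats("busy.prof").sort_stats("cumulative").print_stats(10)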

More generally, I don't see an easy way to implement the equivalent of statistical profiling in PyPy. It's quite hard to make it meaningful in the presence of functions that are inlined into each other and then optimized globally... I'd be interested to hear whether such a tool actually exists for some other high-level language, doing statistical profiling on a VM with a tracing JIT.

We could record enough information to track each small group of assembler instructions back to the real Python function it comes from, and then use hacks to inspect the current Instruction Pointer (IP) at the machine level. Not impossible, but serious work :-)

Licensed under: CC-BY-SA with attribution