The difference between the time and timeit timings is that, by default, timeit() temporarily turns off garbage collection during the timing. When you allocate a lot of memory, the cyclic garbage collector will normally kick in to see if it can reclaim some of it; to get more consistent timings, timeit disables this behavior for the duration of the timing.
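Roughly, this is what timeit does around your statement. A minimal sketch of the idea (not the actual implementation; the function name is mine):

import gc
import time

def timeit_style(func, number=1):
    # Save the current GC state, disable GC, time the calls, then restore GC.
    gc_was_enabled = gc.isenabled()
    gc.disable()
    try:
        start = time.time()
        for _ in range(number):
            func()
        elapsed = time.time() - start
    finally:
        if gc_was_enabled:
            gc.enable()
    return elapsed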
Compare the timings with time, with and without garbage collection:
>>> import time, gc
>>> def t1():
... s = time.time()
... content = [node(i, i) for i in range(1000000)]
... print time.time() - s
...
>>> t1()
3.27300000191
>>> gc.disable()
>>> t1()
1.92200016975
to the timings with timeit, with and without garbage collection:
>>> gc.enable()
>>> import timeit
>>> timeit.timeit('content = [node(i, i) for i in range(1000000)]', 'from __main__ import node; import gc; gc.enable()', number=1)
3.2806941528164373
>>> timeit.timeit('content = [node(i, i) for i in range(1000000)]', 'from __main__ import node', number=1)
1.8655694847876134
As you can see, both methods produce the same timing with the same GC settings.
As for the command-line time command, that includes the entire runtime of the program, including interpreter setup and teardown and other parts the other timings don't include. I suspect one of the big contributors to the difference is the time taken to free all the node objects you allocated:
>>> def t2():
... s = time.time()
... [node(i, i) for i in range(1000000)]
... # List and contents are deallocated
... print time.time() - s
...
>>> t2()
3.96099996567
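If you want to see the teardown cost on its own, you can time the del separately. A minimal sketch, using a stand-in two-attribute node class in place of the one from your question:

import gc
import time

class node(object):
    # Stand-in for the node class from the question: just two attributes.
    def __init__(self, a, b):
        self.a = a
        self.b = b

gc.disable()  # take the cyclic collector out of the picture

start = time.time()
content = [node(i, i) for i in range(1000000)]
build = time.time() - start

start = time.time()
del content  # drop the only reference; CPython frees the list and all nodes here
teardown = time.time() - start

print("build: %.3fs  teardown: %.3fs" % (build, teardown))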