Question

I found that the latter was much more efficient (orders of magnitude faster). Is there any reason for this? It was done in Python 2.7.

block = data[y * block_length:y * (block_length + 1)] # Slow

vs.

block = [data[y * block_length + z] for z in xrange(block_length)] # Fast

EDIT:

Using NumPy (this could be the cause); see the code at http://pastebin.com/88KkWd79. Run it with time python test.py a or time python test.py b. As the power gets larger, function b begins to take much, much longer.


Solution

The first one should be much faster. However, note that those two lists are not equivalent: the first one has y entries, while the second one has block_length entries. If y is very large (e.g. because you are splitting a very long list into relatively small blocks), this could account for the difference in running time.

Probably you meant this instead:

block = data[y * block_length : (y + 1) * block_length]
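To make the difference concrete, here is a small sketch (with made-up data and block sizes) comparing the original slice, the list comprehension, and the corrected slice:

```python
# Hypothetical example: split a 12-element list into blocks of length 3
# and extract block number y = 2 with each of the three expressions.
data = list(range(12))
block_length = 3
y = 2

# Original slice: data[6:8] -- its length is y, not block_length
buggy = data[y * block_length : y * (block_length + 1)]

# List comprehension: always yields block_length entries
comp = [data[y * block_length + z] for z in range(block_length)]

# Corrected slice: equivalent to the comprehension
fixed = data[y * block_length : (y + 1) * block_length]

print(buggy)  # [6, 7]       -- only y = 2 entries
print(comp)   # [6, 7, 8]
print(fixed)  # [6, 7, 8]
```

Note that the original slice's length grows with y, so for later blocks it copies ever-larger chunks of the list, which explains why it looked orders of magnitude slower.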
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow