No, that's trial division, which has much worse time complexity than the sieve of Eratosthenes. Its space complexity is a bit better, but since there are about n/log(n) primes below n, you are not saving huge quantities of memory. Also, the sieve can be implemented with bit vectors, which reduces the constant factor by 32 or 64 times (so for practical purposes it may be even better).
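For reference, the two functions used in the benchmarks could look something like this (a sketch only; the actual implementations being timed are not shown, so these are hypothetical reconstructions):

```python
def primes_list(n):
    """Trial division: test each candidate against every prime found so far."""
    primes = []
    for cand in range(2, n):
        if all(cand % p != 0 for p in primes):
            primes.append(cand)
    return primes

def erat(n):
    """Sieve of Eratosthenes over a plain boolean list."""
    sieve = [True] * n
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            # cross out every multiple of i starting from i*i
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]
```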
Small benchmark that shows the difference in timings:
>>> timeit.timeit('primes_list(1000)', 'from __main__ import primes_list', number=1000)
0.901777982711792
>>> timeit.timeit('erat(1000)', 'from __main__ import erat', number=1000)
0.2097640037536621
As you can see, even with n=1000 Eratosthenes is more than 4 times faster. If we increase the search up to 10000:
>>> timeit.timeit('primes_list(10000)', 'from __main__ import primes_list', number=1000)
50.41101098060608
>>> timeit.timeit('erat(10000)', 'from __main__ import erat', number=1000)
2.3083159923553467
Now Eratosthenes is about 21 times faster, and the gap only widens as n grows.
Using numpy arrays, it's quite easy to reduce the memory by a factor of 32 or 64 (depending on your machine architecture) and obtain much faster results:
>>> import numpy as np
>>> def erat2(n):
...     ar = np.ones(n, dtype=bool)
...     ar[0] = ar[1] = False
...     ar[4::2] = False
...     for j in xrange(3, n, 2):
...         if ar[j]:
...             ar[j**2::2*j] = False
...     return ar.nonzero()[0]
... 
>>> timeit.timeit('erat2(10000)', 'from __main__ import erat2', number=1000)
0.5136890411376953
Another factor of 4 faster than the plain-Python sieve.
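Note that `xrange` is Python 2 only; on Python 3 the same sieve works with `range`. A direct port, with a quick sanity check of the output:

```python
import numpy as np

def erat2(n):
    """Numpy sieve over odd numbers only (Python 3 port of the version above)."""
    ar = np.ones(n, dtype=bool)
    ar[0] = ar[1] = False
    ar[4::2] = False          # cross out all even numbers > 2
    for j in range(3, n, 2):  # only odd candidates
        if ar[j]:
            ar[j**2::2*j] = False  # odd multiples of j, starting at j*j
    return ar.nonzero()[0]

# quick sanity check
print(list(erat2(30)))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```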