Using Python 3.5.2, numpy.roots
ran out of memory and crashed my Chromebook when I tried to determine the 1,200th roots of unity. The crash occurred while constructing the companion matrix of the polynomial, so I don't think numpy uses a sparse representation. From the docs:
The algorithm relies on computing the eigenvalues of the companion matrix
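To see why that is expensive, here is a small sketch of the dense companion matrix such an eigenvalue-based approach has to store. It uses numpy's polycompanion, which builds the same kind of matrix the docs describe, though not necessarily the exact routine numpy.roots calls internally:

```python
import numpy as np

# Companion matrix of x**n - 1; coefficients are ordered low to high,
# so they are [-1, 0, ..., 0, 1].
n = 4
coeffs = np.zeros(n + 1)
coeffs[0], coeffs[n] = -1.0, 1.0

C = np.polynomial.polynomial.polycompanion(coeffs)
print(C.shape)  # (4, 4): dense storage grows as n**2

# Its eigenvalues are exactly the nth roots of unity.
roots = np.linalg.eigvals(C)
```

For n = 1,200 this is already a 1,200 x 1,200 dense matrix, and the eigenvalue solver needs further working memory on top of that.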
If you just need to compute the roots of unity, a trigonometric approach is asymptotically better in both time and space complexity:
def nthRootsOfUnity1(n):  # linear space, parallelizable
    from numpy import arange, pi, exp
    return exp(2j * pi / n * arange(n))
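A quick self-contained check of the vectorized version (the function is repeated here with module-level imports): every returned value z satisfies z**n == 1 up to floating-point error, and for n > 1 the roots sum to zero.

```python
import numpy as np

def nthRootsOfUnity1(n):  # linear space, parallelizable
    return np.exp(2j * np.pi / n * np.arange(n))

roots = nthRootsOfUnity1(6)

# Each root raised to the nth power gives 1 (up to rounding) ...
assert np.allclose(roots**6, 1.0)
# ... and the nth roots of unity sum to zero for n > 1.
assert abs(roots.sum()) < 1e-9
```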
If you're willing to give up on parallelization, you can use generators to achieve constant space and constant time for each additional root, whereas the first method must compute all n roots before returning:
def nthRootsOfUnity2(n):  # constant space, serial
    from cmath import exp, pi
    c = 2j * pi / n
    return (exp(k * c) for k in range(n))
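Because this returns a generator, aggregates like sum() consume the roots one at a time in O(1) extra memory. A small usage sketch:

```python
from cmath import exp, pi

def nthRootsOfUnity2(n):  # constant space, serial
    c = 2j * pi / n
    return (exp(k * c) for k in range(n))

# sum() pulls one root at a time from the generator; the full list of
# roots is never materialized. The sum should be ~0 for n > 1.
total = sum(nthRootsOfUnity2(1000))
assert abs(total) < 1e-9
```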
On my machine and without using parallelization, these methods compute the 10,000,000th roots in the time it takes numpy.roots to compute the 1,000th roots:
In [3]: n = 1000
In [4]: %time _ = sum(numpy.roots([1] + [0]*(n - 1) + [-1]))
CPU times: user 40.7 s, sys: 811 ms, total: 41.5 s
Wall time: 10.8 s
In [5]: n = 10000000
In [6]: %time _ = sum(nthRootsOfUnity1(n))
CPU times: user 4.98 s, sys: 256 ms, total: 5.23 s
Wall time: 5.23 s
In [7]: %time _ = sum(nthRootsOfUnity2(n))
CPU times: user 11.6 s, sys: 2 ms, total: 11.6 s
Wall time: 11.6 s