Question

I've seen plenty of statements in papers and on websites that Fast Fourier Transform-based multiplication algorithms are slower than other multiplication algorithms for relatively small input sizes N, and plenty of data demonstrating that this is the case, but nothing I've come across explains what actually causes the FFT approach to be slower for small N.

I would guess that the overhead comes from getting the input into a form that the FFT can swallow, but I'd like to know what the actual cause of the overhead is and whether it can be reduced. Note that I'm not asking about switching from an FFT-based method to another algorithm when N falls below a certain threshold, as many implementations do; I'm asking about the source of overhead in the FFT-based approach itself and what can be done to reduce it.
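To make the steps under discussion concrete, here is a minimal sketch (not from the question) of FFT-based integer multiplication using NumPy's complex FFT. The function name `fft_multiply` and the base-10 digit representation are illustrative assumptions; real bignum libraries use carefully tuned floating-point FFTs or number-theoretic transforms and much larger limbs. The sketch just shows the surrounding pipeline the question refers to: splitting into digits, zero-padding, two forward transforms, a pointwise product, an inverse transform, rounding, and carry propagation.

```python
import numpy as np

def fft_multiply(a: int, b: int, base: int = 10) -> int:
    """Multiply two non-negative integers via an FFT-based convolution.

    Illustrative sketch only; production libraries use tuned FFTs or
    number-theoretic transforms with larger digit limbs.
    """
    # 1. Split each operand into base-`base` digits, least significant first.
    da = [int(d) for d in str(a)[::-1]]
    db = [int(d) for d in str(b)[::-1]]

    # 2. Zero-pad to a power of two long enough that the cyclic
    #    convolution equals the linear convolution of the digit sequences.
    n = 1
    while n < len(da) + len(db):
        n *= 2

    # 3. Forward transforms, pointwise multiply, inverse transform,
    #    then round back to integer convolution coefficients.
    fa = np.fft.fft(da, n)
    fb = np.fft.fft(db, n)
    coeffs = np.rint(np.fft.ifft(fa * fb).real).astype(np.int64)

    # 4. Carry propagation turns convolution coefficients back into digits.
    result, carry = 0, 0
    for i, c in enumerate(coeffs):
        c += carry
        carry, digit = divmod(int(c), base)
        result += digit * base ** i
    return result + carry * base ** n

assert fft_multiply(123456789, 987654321) == 123456789 * 987654321
```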

No correct solution
