Reference: Page 4 of this document

Given two polynomials $p(x)$ and $q(x)$, each of degree $n$ and represented in coefficient form, evaluating either one at a given input $x$ takes $\Theta(n)$ time (e.g., via Horner's rule).

In the reference linked, to get the product of these two polynomials, $c(x) = p(x)q(x)$, by brute force we have to compute the new coefficients via the convolution $\left(c_k = \sum_{i=0}^{k} a_i b_{k-i}\right)$, where $a_i$ and $b_i$ are the coefficients of $p$ and $q$; this takes $\Theta(n^2)$ time.
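As a minimal sketch of that brute-force convolution (the function name `convolve` is my own, and coefficients are stored low-to-high):

```python
def convolve(a, b):
    """Coefficients of c(x) = p(x) * q(x) by direct convolution.

    a, b: coefficient lists, lowest degree first.
    Runs in Theta(n^2) for two degree-n inputs, since every
    pair (a_i, b_j) contributes to c_{i+j}.
    """
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c
```

For example, `convolve([1, 1], [1, 1])` gives the coefficients of $(1+x)^2 = 1 + 2x + x^2$.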

However, if we only want to evaluate this product at some given input $x$, we can evaluate the two polynomials $p(x)$ and $q(x)$ separately and then multiply the two scalar results. Overall, this takes $\Theta(n)$ time: two evaluations plus one scalar multiplication. This avoids the convolution entirely and is faster than $\Theta(n^2)$.
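That "evaluate separately, then multiply" idea can be sketched like this (function names are my own; `horner` is the standard $\Theta(n)$ evaluation):

```python
def horner(coeffs, x):
    """Evaluate a polynomial (coefficients lowest degree first) at x.

    Horner's rule: Theta(n) multiplications and additions.
    """
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def eval_product(p, q, x):
    """Evaluate p(x) * q(x) at a point without ever forming the
    product's coefficients: two Theta(n) evaluations plus one
    scalar multiply, so Theta(n) overall."""
    return horner(p, x) * horner(q, x)
```

For instance, with $p(x) = 1 + 2x$ and $q(x) = 3 + x$ at $x = 2$, both routes give $5 \cdot 5 = 25$.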

Question: When it comes to evaluating the product of two polynomials at a point, is it actually $\Theta(n)$ using the method above, or are we somehow still doomed to $\Theta(n^2)$?

I'm aware there's a faster way to compute the product in $\mathcal{O}(n \log n)$ time via the Fast Fourier Transform, but I'm more curious about whether the convolution is necessary at all. I suspect it only matters if we care about obtaining the coefficients of the product, i.e., knowing $\{c_0, c_1, c_2, \dotsc\}$, rather than just the value $c(x) = p(x)q(x)$.
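For completeness, here is one way that $\mathcal{O}(n \log n)$ route can look using NumPy's FFT (a sketch under the assumption of real integer coefficients; `fft_multiply` is my own name). It evaluates both polynomials at roots of unity, multiplies pointwise, then interpolates back to coefficients:

```python
import numpy as np

def fft_multiply(a, b):
    """Coefficients of p(x) * q(x) via the FFT in O(n log n).

    Evaluate both polynomials at enough roots of unity (forward
    transform), multiply the values pointwise, then interpolate
    back (inverse transform). Rounding recovers the exact integer
    coefficients, assuming the inputs are integers.
    """
    size = len(a) + len(b) - 1          # degree of product + 1
    fa = np.fft.rfft(a, size)           # p evaluated at roots of unity
    fb = np.fft.rfft(b, size)           # q evaluated at roots of unity
    c = np.fft.irfft(fa * fb, size)     # interpolate the product back
    return np.rint(c).astype(int).tolist()
```

This gives the same coefficients as the $\Theta(n^2)$ convolution, e.g. `fft_multiply([1, 1], [1, 1])` is `[1, 2, 1]`.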


Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange