From Wikipedia:

In computational complexity theory, a numeric algorithm runs in pseudo-polynomial time if its running time is a polynomial in the length of the input (the number of bits required to represent it) and the numeric value of the input (the largest integer present in the input). In general, the numeric value of the input is exponential in the input length, which is why a pseudo-polynomial time algorithm does not necessarily run in polynomial time with respect to the input length.

From this SO post, which was very helpful:

An algorithm runs in pseudo-polynomial time if the runtime is some polynomial in the numeric value of the input, rather than in the number of bits required to represent it.

And from this MIT OCW lecture on the knapsack DP running time and the definition of pseudo-polynomial time.

They are all describing the same thing: a running time is pseudo-polynomial when it is a multivariate polynomial in $(x, k)$, where $x$ denotes the size of the input and $k$ is some integer (numeric value) involved. Since $k$ is often exponential in $x$, pseudo-polynomial time is not necessarily polynomial time in $x$, the number of bits needed to represent the input. Mathematically, $k = \Theta(2^{x})$, where the base $2$ is replaceable by any other constant.

This works absolutely fine when the input is a single numeric value $k$ and the running time is $\Theta(k)$. Then $x = \Theta(\log k)$, so the running time is $\Theta(2^{x})$. Everything is clean.
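
To make the clean case concrete, here is a minimal Python sketch (the function `count_to` is a hypothetical toy of my own, not from any of the sources): its runtime is $\Theta(k)$, which is $\Theta(2^{x})$ in terms of the input length $x = \Theta(\log k)$.

```python
def count_to(k):
    """Toy pseudo-polynomial example: input is a single integer k.

    The loop does Theta(k) work, but k fits in x = Theta(log k) bits,
    so the runtime is Theta(2^x): polynomial in k's numeric value,
    exponential in the input length x.
    """
    total = 0
    for _ in range(k):  # one iteration per unit of k's value
        total += 1
    return total
```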

And then I start to see more complicated issues in other algorithms. For example, in the knapsack case, let $N$ denote the number of items in the input and $S$ the knapsack capacity. As described in the MIT video, the input size is $x = \Theta(N \log S)$ and the running time is $\Theta(NS)$. The instructor argues that since $S$ is exponential in $\log S$ (the number of bits needed to represent the numeric value), we can see that the knapsack running time is not polynomial. But $\log S$ is only a part of $x$, so I don't see how $\Theta(N \cdot 2^{\log S})$ is exponential in $x = \Theta(N \log S)$. How can the running time be expressed in the form $O(c^{x})$ where $c$ is a constant? (A sketch of the DP in question follows.)
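
For reference, here is a minimal sketch of the 0/1 knapsack DP being discussed (my own Python rendering, assuming $S$ is the integer capacity and each item has an integer size and a value); the $N$ passes over a table of $S + 1$ entries are what give the $\Theta(NS)$ bound:

```python
def knapsack(values, sizes, S):
    """0/1 knapsack DP over N items and integer capacity S.

    Runtime is Theta(N * S): polynomial in the numeric value S,
    but S can be exponential in the log S bits used to encode it.
    """
    N = len(values)
    # best[c] = max value achievable with capacity c using items seen so far
    best = [0] * (S + 1)
    for i in range(N):
        # iterate capacities downward so each item is used at most once
        for c in range(S, sizes[i] - 1, -1):
            best[c] = max(best[c], best[c - sizes[i]] + values[i])
    return best[S]

# Example: knapsack([60, 100, 120], [10, 20, 30], 50) returns 220.
```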

Something similar happens for the counting sort running time. The SO post gives the running time as $O(N + k)$ and applies the same logic as above to say that the running time is exponential in the input size $\log k$. But to me it seems that $x = \Theta(N \log k)$, since the input is a list of $N$ numbers with maximum value $k$. I don't see how $O(N + k)$ can be exponential in $x$ just because $k$ is exponential in the number of bits needed to represent it: $\log k$ is definitely not $x$, only a part of it. Same thing as before: the running time doesn't seem exponential in $x$, only in $\log k$.
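
Again for concreteness, a minimal counting sort sketch (my own, assuming nonnegative integer keys bounded by $k$); the array of $k + 1$ counters is where the $k$ term in $O(N + k)$ comes from:

```python
def counting_sort(a, k):
    """Sort a list of integers in the range [0, k].

    Runtime is Theta(N + k): the k term is linear in k's numeric
    value, hence exponential in the log k bits needed to write k.
    """
    counts = [0] * (k + 1)
    for v in a:          # Theta(N): tally each key
        counts[v] += 1
    out = []
    for v, c in enumerate(counts):  # Theta(N + k): emit keys in order
        out.extend([v] * c)
    return out

# Example: counting_sort([3, 1, 3, 0], 3) returns [0, 1, 3, 3].
```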

So either I am still not getting what "exponential in the length of the input" means, or there is some conflict in how people define the length of the input. The MIT video defines input size similarly to how I think of $x$, while many other sources seem to take a different view. I see it as the total number of bits needed to represent the whole input, but some analyses treat it as the number of bits needed to represent a numeric value that is only a part of the whole input. Is a running time exponential in the input size if it is exponential in the size of only a part of the input, or does it have to be exponential in the whole thing? Please clarify!
