Question

Comparison-based sorting algorithms have a worst-case lower bound of $\Omega(n \log n)$ comparisons, where $n$ is the number of elements in the input list.

However, in other models of computation, such as algorithms that operate on arbitrary-size numbers (e.g. Euclid's GCD algorithm), the input size $n$ is no longer the number of elements but the total number of bits required to represent the input. In these cases, time complexity is measured as the number of bit operations performed, as a function of the total number of bits in the input.

Looking at the post What Is Pseudo-Polynomial Time? How Does It Differ From Polynomial Time?

Polynomial time is defined as:

An algorithm runs in polynomial time if its runtime is $O(x^k)$ for some constant $k$, where $x$ denotes the number of bits of input given to the algorithm.

Pseudo-polynomial time is defined as:

An algorithm runs in pseudo-polynomial time if the runtime is some polynomial in the numeric value of the input, rather than in the number of bits required to represent it.

Looking at How Can We Assume That Basic Operations on Numbers Take Constant Time?

For word size $w = \Theta(\log n)$, standard arithmetic operations on these words take $O(1)$ time, by definition.

With these definitions in mind, I have trouble seeing how some comparison-based sorting algorithms are polynomial time with respect to both $n$, the number of elements in the input, and $x$, the total number of bits in the input; the first post I referenced makes this claim.

Why they are polynomial with respect to $n$ is pretty obvious, so I will only look at the bit representation of the input, $x$, for comparison-based sorting algorithms with two different time complexities.

Merge or Quick Sort on a List of $n$ Elements

We need $w = \Theta(\log n)$, as defined in CLRS, since we at least need to be able to index into an array of size $n$. Assuming all elements fit in a word, which I think is a reasonable assumption here, $x = wn = \Theta(n \log n)$. Please let me know if my $x$ is wrong!
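To make the setup concrete, here is a minimal sketch (the function name is mine) of the total input size $x$ under the assumption above that every element fits in one $w = \lceil \log_2 n \rceil$-bit word:

```python
import math

def total_input_bits(n):
    """Total bits x to represent a list of n elements, assuming each
    element fits in one machine word of w = ceil(log2(n)) bits
    (following the CLRS convention w = Theta(log n))."""
    w = max(1, math.ceil(math.log2(n)))
    return n * w

print(total_input_bits(1024))  # 1024 * 10 = 10240, i.e. x = Theta(n log n)
```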

Since the running time is $O(n \log n)$, the time complexity with respect to $x$ is $O(x)$. I interpret this as meaning we need a linear number of operations in the total number of bits needed to represent the input list.
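This can be sanity-checked numerically with a merge sort that counts comparisons (the counter is my own addition for illustration; the algorithm itself is standard top-down merge sort):

```python
def merge_sort_count(a):
    """Merge sort that also returns the number of element comparisons."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort_count(a[:mid])
    right, cr = merge_sort_count(a[mid:])
    merged, i, j, comps = [], 0, 0, cl + cr
    while i < len(left) and j < len(right):
        comps += 1                      # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])             # leftovers need no comparisons
    merged.extend(right[j:])
    return merged, comps
```

For $n = 1024$ the comparison count stays below $n \lceil \log_2 n \rceil = 10240$, i.e. below $x$ itself in this model, matching the $O(x)$ claim.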

Selection Sort on a List of $n$ Elements:

I will assume the same input size $x = wn = \Theta(n \log n)$. The running time is $O(n^2)$. Here I need some help: how do I write the running time cleanly in terms of $x$, as I did for Merge/Quick Sort above? The first post I referenced just used $w = 32$ bits and easily got $O(x^2)$ polynomial time, but this doesn't seem right to me, since the second post I referenced (How can we assume that basic operations on numbers take constant time?) and CLRS stress that $w$ is not a constant but $\Theta(\log n)$.
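For concreteness, the $O(n^2)$ count I am trying to restate in terms of $x$ comes from the standard selection sort; a minimal sketch with a comparison counter (the counter and name are mine) shows it performs exactly $n(n-1)/2$ comparisons:

```python
def selection_sort_count(a):
    """Selection sort with a comparison counter added for illustration.
    Always performs exactly n(n-1)/2 comparisons, regardless of input."""
    a = list(a)
    comps = 0
    for i in range(len(a)):
        smallest = i
        for j in range(i + 1, len(a)):
            comps += 1                  # compare a[j] with current minimum
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return a, comps
```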

I have a similar question for counting sort, which has not been answered yet. The first post I referenced claims it is exponential with respect to $x$ without the $k = \Theta(n)$ condition, where $x = \Theta(n \log k)$ bits, but I have trouble rewriting $O(n + k)$ in terms of $x$. Please help!
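For reference, here is a minimal counting sort sketch (function name mine), which makes explicit that the $O(k)$ work depends on the numeric value of $k$ even though $k$ occupies only about $\log_2 k$ bits of the input, which is where the pseudo-polynomial concern comes from:

```python
def counting_sort(a, k):
    """Counting sort for integers in [0, k); runs in O(n + k) time."""
    counts = [0] * k                    # O(k) initialization: k is a value,
                                        # not a bit length
    for v in a:                         # O(n) counting pass
        counts[v] += 1
    out = []
    for v, c in enumerate(counts):      # O(n + k) output pass
        out.extend([v] * c)
    return out

print(counting_sort([3, 1, 2, 1, 0], 4))  # [0, 1, 1, 2, 3]
```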


Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange