Question

Why, in computer science, is any complexity that is at most polynomial considered efficient?

For any practical application(a), algorithms with complexity $n^{\log n}$ are way faster than algorithms that run in time, say, $n^{80}$, but the former is considered inefficient while the latter is efficient. Where's the logic?!

(a) Assume, for instance, the number of atoms in the universe is approximately $10^{80}$.


Solution

Another perspective on "efficiency" is that polynomial time allows us to define a notion of "efficiency" that doesn't depend on machine models. Specifically, there's a variant of the Church-Turing thesis called the "effective Church-Turing thesis", which says that any problem that runs in polynomial time on one kind of machine model will also run in polynomial time on another, equally powerful machine model.

This is a stronger claim than the general Church-Turing thesis, and it is 'sort of' violated by both randomized algorithms and quantum algorithms, but it has not been violated in the sense of an NP-hard problem becoming solvable in polynomial time just by changing the machine model.

This is ultimately the reason why polynomial time is a popular notion in theoretical CS. However, most people realize that it does not reflect "practical efficiency". For more on this, Dick Lipton's post on 'galactic algorithms' is a great read.

OTHER TIPS

In theory, we care about asymptotic behaviour, and we describe classes of problems and algorithms based on their asymptotic behaviour. The keyword here is asymptotic. $O(n^{80})$ is faster than $O(n^{\log n})$ asymptotically, i.e., starting from $n > 2^{80} = 1208925819614629174706176$ (roughly 1.2 septillion!), assuming a base-2 logarithm, unit constant coefficients, and no low-order terms.
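To make the crossover concrete, here is a minimal Python sketch (not from the original answer) that compares the two growth rates; the function names and the choice of base-2 logarithm are illustrative assumptions.

```python
import math

def cost_poly(n):
    """n^80: polynomial, hence 'efficient' in the textbook sense."""
    return n ** 80

def cost_quasi(n):
    """n^(log2 n): quasi-polynomial, hence 'inefficient' in the textbook sense.
    Exact integer arithmetic when n is a power of two."""
    return n ** int(math.log2(n))

# The two curves cross where log2(n) = 80, i.e. at n = 2^80 ~ 1.2e24.
for k in (10, 40, 79, 80, 81):
    n = 2 ** k
    smaller = "n^(log n)" if cost_quasi(n) < cost_poly(n) else "n^80"
    print(f"n = 2^{k}: the smaller running time is {smaller}")
```

For every input size below roughly $2^{80}$, the "inefficient" quasi-polynomial bound is the smaller one, which is exactly the point of the question.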

In practice, however, attention is paid to both exponents and constant coefficients. Input sizes cannot grow to septillions in practice, so yes, $n^{\log n}$ will for all practical purposes be a superior choice to $n^{80}$. Other factors also matter in practice: parallelism and memory access patterns (e.g., locality).

For example, most libraries for integer multiplication, e.g. GMP, implement a mixture of algorithms and select the practically superior algorithm based on the input size, even though the selected algorithm might be asymptotically inferior. Some asymptotically "inferior" algorithms are faster on certain input sizes and are chosen over the asymptotically optimal algorithms.
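As an illustration of this kind of size-based dispatch (a simplified sketch, not GMP's actual code; the threshold value and function names are made up), a bignum library might switch between schoolbook and Karatsuba multiplication like this:

```python
KARATSUBA_THRESHOLD_BITS = 512  # hypothetical cutoff; real libraries tune this per CPU

def multiply(x: int, y: int) -> int:
    """Multiply two nonnegative integers, picking the algorithm by input size."""
    if max(x.bit_length(), y.bit_length()) < KARATSUBA_THRESHOLD_BITS:
        return schoolbook(x, y)   # O(n^2), but smaller constant: faster for small inputs
    return karatsuba(x, y)        # O(n^1.585), wins for large inputs

def schoolbook(x: int, y: int) -> int:
    # Stand-in for the quadratic "long multiplication" routine.
    return x * y

def karatsuba(x: int, y: int) -> int:
    # Classic three-multiplication recursion; recursive calls fall back to
    # schoolbook once operands drop below the threshold again.
    if max(x.bit_length(), y.bit_length()) < KARATSUBA_THRESHOLD_BITS:
        return schoolbook(x, y)
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)
    yh, yl = y >> m, y & ((1 << m) - 1)
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b
    return (a << (2 * m)) + (c << m) + b
```

Real libraries typically tune such thresholds per CPU and add further tiers (Toom-Cook, FFT-based methods) for ever larger operands.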

Another example: the fastest known matrix multiplication algorithm is the Coppersmith-Winograd algorithm, which runs in $O(n^{2.3737})$ time (there are recent improvements; more here). However, it was never implemented because (1) it is hard to implement and (2) the constant coefficient is gigantic. All linear algebra packages use the asymptotically less optimal algorithm of Strassen.
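For concreteness, here is a minimal sketch (illustrative only, not a production routine) of Strassen's $O(n^{2.81})$ recursion with a fallback to classical multiplication below a hypothetical cutoff; the cutoff value is an assumption.

```python
import numpy as np

CUTOFF = 64  # hypothetical crossover; real implementations tune this empirically

def strassen(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Strassen multiplication for n-by-n matrices with n a power of two.
    Below CUTOFF it falls back to the classical algorithm, whose constant is smaller."""
    n = A.shape[0]
    if n <= CUTOFF:
        return A @ B  # classical multiplication (delegated to NumPy here)
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products instead of the classical eight.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

The cutoff exists precisely because the classical algorithm's smaller constant wins on small blocks, the same reasoning that keeps Coppersmith-Winograd out of practical use.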

TL;DR: theory cares about asymptotic behaviour in order to compare algorithms as the input size grows arbitrarily large.

This answer will look at the "bigger picture" context of your question. Computer science is actually a relatively young and somewhat open science, and it doesn't yet have great or even good answers to some basic and fundamental questions. The basic question "what is efficiently computed?" is either accurately or roughly formalized in CS (depending on opinion) as the famous P vs NP problem (or the closely related P vs EXPTIME problem), and it is still open more than four decades after being introduced by Cook and Levin around 1970, despite intense work by the world's greatest computer scientists (and many mathematicians are also interested in the problem as fundamental).

So in other words, even with a rough definition of "efficient" as P time, and one of the highest-valued scientific awards (a $1M prize attached to the problem for over ten years), computer science cannot even prove that some problems close to this borderline must or must not have efficient (P-time) algorithms. Therefore a definition of "efficient" more precise than P time is not necessary, or even possible, at this time. If and when the P vs NP conjecture is settled one way or the other, a stricter definition of "efficient" may, or presumably will, become possible.

Moreover, one might feel that the P-time definition of "efficient" is a bit "sloppy", and most computer scientists would probably agree; almost all of them think the P vs NP conjecture is of the utmost importance to resolve, to the point that they might even regard this observation as trivial. In other words, so to speak, it is a work in progress / we're working on it. (In fact, mainstream computer scientists, only half-jokingly, routinely refer to the gap and the lack of progress/definitive separations as embarrassing.)

In fact there is even a closely related and significantly stronger conjecture than P vs NP, namely NP vs P/poly, which also cannot be resolved by computer science at this time. It conjectures that NP problems cannot be solved by any polynomial-size circuits, i.e., not even by circuits that are freed from the restriction of being generated by algorithms/Turing machines (so-called non-uniform circuits).

As for how hard P vs NP may be: there is some reason to think it may be at least as hard as the very old Riemann hypothesis in mathematics (now over 1.5 centuries old), because both have carried the same $1M award for over a decade, and neither has been solved yet.

So in other words, to define precisely which algorithms are really "efficient" is actually one of the most important and hardest open problems in theoretical science and mathematics.

In fact the question of "what is efficiently computed" is even more subtle, because there is a variant of the Church-Turing thesis called the P-time CT thesis, and it is not known whether quantum computing violates it. Shor's breakthrough result of P-time quantum factoring is considered a dramatic twist in this research. In other words, the problem of what is efficiently computed plausibly descends all the way to deep physics principles, and relates to whether quantum computing can compute more efficiently than classical computation, which is also a generally open problem in theoretical CS and advanced physics.

So one can even add that P vs NP and the question of efficient computing may be of crucial or fundamental importance not only to CS and mathematics but also to physics.

[1] P vs NP problem, wikipedia

[2] Millennium Prize Problems

[3] P/Poly class, wikipedia

[4] Shor's algorithm

Polynomial-time algorithms are considered efficient only in comparison with the hardest non-polynomial-time problems, especially the so-called NP-complete ones. See image: Euler diagram for P, NP, NP-complete, and NP-hard sets of problems.

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange