Question

I don't know much about NP-completeness, but I've read about it here and there. The book Introduction to Algorithms, which I'm studying by myself, states: "Although no efficient algorithm for an NP-complete problem has ever been found, nobody has ever proven that an efficient algorithm for one cannot exist." I'm just wondering: how does one know that the algorithm they have is not the most efficient one out of the whole set of possible algorithms?

Thanks,

Sam


Solution

Great question. I'm curious what the other answers to this question will be, but here's my attempt.

It's surprisingly difficult to know whether a better algorithm exists than the one you came up with (if you knew what that better algorithm was, it would simply become your algorithm). The question then becomes: when should you stop looking for a better algorithm? The primary approach involves coming up with lower bounds for the problem and then successively making them stronger and stronger.

Imagine you're a researcher

A good start to investigating this problem further is to think about the standard comparison-based sorting problem: we are given a list of n elements and want to sort it. The worst algorithm is to generate all n! orderings of the list and check which one is sorted. A more intuitive approach is to use something like bubble sort, which is O(n^2). We wonder whether we can do better still. Say we use divide and conquer and come up with merge sort, which is O(n log n). Now we want to know whether merge sort is NOT the most efficient. So we spend more time trying to think of an even better algorithm, but cannot come up with one. Frustrated, we switch our approach and instead try to prove that no comparison sort can beat O(n log n).
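
To make the divide-and-conquer step concrete, here is a minimal Python sketch of merge sort (my own illustration, not code from the book):

```python
def merge_sort(a):
    """Sort a list by splitting it in half, sorting each half recursively,
    and merging the results. Each of the O(log n) levels of recursion does
    O(n) merging work, which is where the O(n log n) bound comes from."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])

    # Merge the two sorted halves with a two-pointer scan.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged


print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```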

Now, intuitively, a naive lower bound is Ω(n), simply because in order to sort the list we at least need to read it, which takes Ω(n) time. But let us try to improve this lower bound. See if you can work out the improvement to Ω(n log n) yourself if you haven't seen it before. If you can't get it, check out this great article, which proves that n log n is a lower bound for comparison sorts: http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/19-lowerbounds.pdf
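
For reference, the heart of the argument in that link is a counting argument over decision trees; a rough sketch of the bound (assuming the standard comparison model):

```latex
% Any comparison sort corresponds to a binary decision tree whose leaves
% must distinguish all n! possible input orderings, so its height h
% (the worst-case number of comparisons) satisfies
\[
  h \;\ge\; \log_2(n!) \;\ge\; \log_2\!\Bigl(\bigl(\tfrac{n}{2}\bigr)^{n/2}\Bigr)
    \;=\; \tfrac{n}{2}\,\log_2\tfrac{n}{2} \;=\; \Omega(n \log n).
\]
```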

Now let us start thinking about NP-complete problems. Consider the vertex cover problem (does there exist a set of k vertices in a graph such that each edge is incident to at least one of them?), which is NP-complete. We come up with the most intuitive brute-force method for solving it: try all (n choose k) choices of vertices and test each candidate solution. Now the question is: does something more efficient exist? Suppose that, after much effort, we cannot come up with anything faster. So we take the same approach as before, trying to find good lower bounds and successively improving them. Clearly Ω(n) is one lower bound, but we cannot prove an Ω(n choose k) lower bound (if we could prove such a lower bound, then brute force would be the best way to solve vertex cover). So we take a break and work on other problems.
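
A brute-force sketch along those lines, assuming the graph is given as a vertex collection and an edge list (my own conventions, just for illustration):

```python
from itertools import combinations


def has_vertex_cover(vertices, edges, k):
    """Brute force: try every size-k subset of vertices and check whether
    it touches every edge. Roughly O(C(n, k) * |E|) time."""
    for candidate in combinations(vertices, k):
        chosen = set(candidate)
        if all(u in chosen or v in chosen for (u, v) in edges):
            return True
    return False


# A triangle has a vertex cover of size 2 but not of size 1.
triangle = [(0, 1), (1, 2), (0, 2)]
print(has_vertex_cover(range(3), triangle, 2))  # True
print(has_vertex_cover(range(3), triangle, 1))  # False
```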

Then one day we are working on the maximum independent set problem on graphs (does there exist a set S of k vertices such that no two vertices in S are adjacent?). We come up with a brute-force solution, but we want to know whether it is NOT the most efficient algorithm. However, we cannot come up with anything better, and we cannot find a tight lower bound either, so we cannot say whether something faster exists.
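
The brute force for independent set looks almost identical (again just a sketch, mirroring the conventions used above):

```python
from itertools import combinations


def has_independent_set(vertices, edges, k):
    """Brute force: try every size-k subset of vertices and check that no
    edge has both endpoints inside it. Roughly O(C(n, k) * |E|) time."""
    for candidate in combinations(vertices, k):
        chosen = set(candidate)
        if not any(u in chosen and v in chosen for (u, v) in edges):
            return True
    return False


# In a triangle, any single vertex is independent, but no pair of vertices is.
triangle = [(0, 1), (1, 2), (0, 2)]
print(has_independent_set(range(3), triangle, 1))  # True
print(has_independent_set(range(3), triangle, 2))  # False
```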

Then many days later we see that these problems are actually equivalent, in the sense that an efficient algorithm for one gives an efficient algorithm for the other: http://www.cs.cmu.edu/~ckingsf/bioinfo-lectures/npcomplete.pdf

So although we don't know whether the algorithms we have for vertex cover or independent set are the most efficient, we can compare relative degrees of hardness by reducing problems to one another, so that if we find a good algorithm for one we can apply it to the other.
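
The equivalence here is very concrete: a set S is independent exactly when V \ S is a vertex cover, so an answer to one question is an answer to the other. Reusing the two brute-force sketches above (they are assumed to be in scope), you can check this on a small random graph:

```python
import random

# S is an independent set of size k  <=>  V \ S is a vertex cover of size n - k,
# so the two brute-force sketches from earlier must always agree.
n = 6
vertices = range(n)
edges = [(u, v) for u in vertices for v in vertices
         if u < v and random.random() < 0.5]

for k in range(n + 1):
    assert has_independent_set(vertices, edges, k) == \
           has_vertex_cover(vertices, edges, n - k)
print("independent-set and vertex-cover answers agree for every k")
```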

TL;DR

Essentially it boils down to the Feynman approach:

Feynman Approach to the Problem

In all seriousness, in order to show that our algorithm is or is not the best one:

  1. Find a better algorithm, or prove that one exists (possibly by reduction to another problem you have seen), or
  2. Prove that a better algorithm cannot exist, possibly by finding a lower bound that our algorithm matches.

If the above two fail, rather than answering definitively, try to get a sense of how difficult the problem you are working on is by looking at the problems it can be reduced to and from, and thinking about their hardness.

Other tips

The statement that you cited refers to one of the biggest unsolved problems in computer science, namely whether P = NP or not.

NP is the class of problems whose solutions can be VERIFIED quickly (in polynomial time), while P is the class of problems whose solutions can be FOUND quickly (in polynomial time).
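
To make "verified quickly" concrete: for vertex cover, checking a proposed certificate (a claimed cover of k vertices) takes only linear time in the number of edges, even though nobody knows how to find such a certificate quickly in general. A small sketch (names and graph format are my own):

```python
def verify_vertex_cover(edges, certificate, k):
    """Polynomial-time verifier: given a claimed cover of at most k vertices,
    confirm in O(|E|) time that every edge is touched. Finding a valid
    certificate is the part that appears to be hard."""
    cover = set(certificate)
    return len(cover) <= k and all(u in cover or v in cover for (u, v) in edges)


triangle = [(0, 1), (1, 2), (0, 2)]
print(verify_vertex_cover(triangle, {0, 1}, 2))  # True, checked quickly
```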

Simply put, if there were an 'efficient' algorithm for an NP-complete problem, then P = NP.

Throughout all these discussions, 'efficient' and 'quickly' almost always mean 'in polynomial time'.

Thus, to answer your question, once you have a non-polynomial algorithm, you know it's not efficient (in the context of the above).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow