The theory of computation often involves nondeterministic models of computation. Examples include nondeterministic finite automata (NFAs), nondeterministic pushdown automata (PDAs), and nondeterministic Turing machines. Real computers, however, are deterministic (or at best, randomized).

What is the point of studying nondeterministic models of computation, given that they are unrealistic?

In particular, what is the point of the complexity class NP? Why should I care about nondeterministic machines running in polynomial time?
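For concreteness, my understanding is that NP can equivalently be described via polynomial-time verifiers: a nondeterministic machine "guesses" a certificate, and a deterministic machine checks it in polynomial time. Here is a minimal sketch of that verifier view for CNF-SAT (the function name `verify_sat` and the clause encoding are my own, just for illustration):

```python
# Illustration of "guess a certificate, verify deterministically" for CNF-SAT.
# A formula is a list of clauses; each clause is a list of nonzero ints,
# where k means variable k and -k means its negation (DIMACS-style).

def verify_sat(formula, assignment):
    """Deterministic polynomial-time check that `assignment`
    (a dict mapping variable -> bool) satisfies every clause."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(verify_sat(formula, {1: True, 2: False, 3: True}))    # True
print(verify_sat(formula, {1: False, 2: False, 3: False}))  # False
```

The verification step is easy; what the nondeterminism abstracts away is the search for the certificate. My question is why that abstraction is worth studying when no real machine can actually "guess".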
