Why aren’t distributed computing and/or GPU considered non-deterministic Turing machines if they can run multiple jobs at once?

cs.stackexchange https://cs.stackexchange.com/questions/119284

Question

So we know a nondeterministic Turing machine (NTM) is just a theoretical model of computation, used in thought experiments to examine the abilities and limitations of computers. It commonly comes up in discussions of P vs. NP, where NP-complete problems are not known to be solvable in polynomial time unless the computation is done on the hypothetical NTM. We also know an NTM uses a set of rules that can prescribe more than one action for a given situation; in other words, it can attempt many different options simultaneously.

Isn't this what distributed computing does across commodity hardware: run many different possible calculations in parallel? And a GPU does the same within a single machine. Why isn't this considered an NTM?

Solution

In parallel computing, the threads can talk to each other and exchange information during the computation. In nondeterminism, the only "communication" between branches is that we compute the OR of the accept/reject outcomes of all possible computation paths. This is much more limited.
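
Here's a minimal sketch of that OR semantics (my illustration, not part of the original answer), using subset-sum as the NP problem: each nondeterministic step "guesses" whether to include an item, the branches never exchange any data, and acceptance is nothing but the OR over all paths.

```python
def ntm_accepts(items, target, partial_sum=0):
    """Deterministic simulation of a nondeterministic 'guess' for subset-sum.

    At each step the machine branches on whether to include the next item.
    The branches never communicate; the machine accepts iff the logical OR
    of all computation paths is True.
    """
    if not items:
        return partial_sum == target          # one complete path: accept/reject
    head, rest = items[0], items[1:]
    # OR of the two nondeterministic branches: take the item, or skip it.
    return (ntm_accepts(rest, target, partial_sum + head)
            or ntm_accepts(rest, target, partial_sum))

print(ntm_accepts([3, 7, 12, 5], 15))  # True  (3 + 12 = 15)
print(ntm_accepts([3, 7, 12, 5], 6))   # False (no subset sums to 6)
```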

If you simulate nondeterminism by spawning a parallel computation for every nondeterministic choice, you need an exponential number of threads to cover a polynomial-time computation. We know how to build parallel machines in the real world; we don't know how to build nondeterministic ones.
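
To see the blow-up, just count the paths: a machine that makes one binary nondeterministic choice per step for t steps has 2^t computation paths, so "one thread per path" needs exponentially many threads even though each individual path is short. A quick sketch (again illustrative, not from the answer):

```python
from itertools import product

# One binary nondeterministic choice per step yields 2**t computation paths,
# so simulating nondeterminism with one thread per path is exponential.
for t in (4, 10, 20):
    num_paths = sum(1 for _ in product((0, 1), repeat=t))
    print(f"{t:2d} choices -> {num_paths:,} paths")
# Output:
#  4 choices -> 16 paths
# 10 choices -> 1,024 paths
# 20 choices -> 1,048,576 paths
```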

Licensed under: CC-BY-SA with attribution