Question

When I work on an algorithm to solve a computing problem, I often find that speed can be increased by using more memory, and that memory usage can be decreased at the price of a longer running time, but I can never push the product of running time and memory consumed below a clearly palpable limit. This is formally similar to Heisenberg's uncertainty principle: the product of the uncertainty in position and the uncertainty in momentum of a particle cannot be less than a given threshold.
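
For example, here is a minimal sketch (in Python, chosen only for illustration) of the kind of tradeoff I mean: deciding whether some pair in a list sums to a target, either spending extra memory for a single pass, or using no extra memory and paying with quadratic time.

```python
def has_pair_with_sum_fast(values, target):
    """O(n) time, O(n) extra memory: trade memory for speed via a lookup set."""
    seen = set()
    for v in values:
        if target - v in seen:
            return True
        seen.add(v)
    return False


def has_pair_with_sum_small(values, target):
    """O(n^2) time, O(1) extra memory: trade speed for memory by recomputing."""
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            if values[i] + values[j] == target:
                return True
    return False
```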

Is there a theorem of computer science that asserts the same thing? I guess it should be possible to derive something similar from the theory of Turing machines.

(I asked this question originally on StackOverflow.)


Solution

This phenomenon is known in computer science as a time-space tradeoff. For example, Ryan Williams proved that if a computer solves SAT on instances of size $n$ in time $T$ and memory $S$, then $T \cdot S \geq n^{1.8}$. (We don't expect this bound to be tight.) A classical result states that recognizing palindromes on a Turing machine requires $T \cdot S = \Omega(n^2)$.
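
To make the analogy with an uncertainty relation explicit: such a bound traces out a hyperbola-like frontier in the $(T, S)$ plane. Reading off its two extremes (simple arithmetic on the stated bound, not part of the theorem statement itself):

$$
T \cdot S \;\geq\; n^{1.8}
\quad\Longrightarrow\quad
\begin{cases}
S = n^{o(1)} & \Rightarrow\; T \geq n^{1.8 - o(1)},\\[2pt]
T = O(n) & \Rightarrow\; S = \Omega(n^{0.8}).
\end{cases}
$$

So squeezing one resource toward its minimum forces the other to grow, just as in the question's informal observation.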

Licensed under: CC-BY-SA with attribution