Question

I was watching this MIT lecture on computational complexity, and at minute 15:00 Erik Demaine begins a proof of what is stated in the title of this question. However, I cannot follow his reasoning. In essence, what he says is this:
we can represent a decision problem as a string of $0$s and $1$s, which is in effect the truth table of the function.
He goes on to say that a decision problem is an infinite string of bits, while a program is a finite string of bits, and up to here I have no problem. What I don't understand is how the proof continues from this point: decision problems correspond to $\mathbb{R}$ because you can put a (binary) point in front of the string representing the problem, thus obtaining the fractional part of a real number,

for example, if you have 0111011010101010101... it would become 0.0111011010101010101...
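If I write out what I think he means explicitly (my own rendering, not taken verbatim from the lecture), the infinite answer string $b_1 b_2 b_3 \dots$ is mapped to the real number

$$x \;=\; \sum_{i=1}^{\infty} b_i \, 2^{-i} \;\in\; [0,1], \qquad b_i \in \{0,1\},$$

i.e. the string is read as the binary expansion of a number between $0$ and $1$.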

A program is "just" an integer in $\mathbb{N}$ cause it is a finite string of bits. The point that I fail to understand is how it is possible that a decision problem is comparable to a real number instead of an integer ... I mean, if we use the argument of "put a dot in front of the number" could not the same reasoning also be applied to the number of possible algorithms that can ever be produced?

No correct solution

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange