Question

I'm interested in gathering some references that discuss the topic of the relationship between computation and physics.

Specifically, I'm interested in investigating two points of view:

  • our current model of computation -- assuming it describes reality accurately -- imposes a limit on what is physically possible. To me this is essentially the argument that Max Tegmark makes with his Mathematical Universe Hypothesis
  • the physics of our universe puts limits on our models of computation. Are the limits of our models of computation a direct consequence of the laws of physics?

It seems to me that these two points of view are not the same; however, I lack the necessary background to really tell. I know about Tegmark, but I don't know any references on the second point of view.

I am aware of the physical limits of computation in the sense that there are limits on how fast computers can run, or how small we could theoretically build them; however, our models of computation are substrate-independent, and their fundamental limits are not imposed by physical properties like speed or scale. The limits are more fundamental, e.g., undecidability and the halting problem.

I wonder if these types of limits of our models are a consequence of the structure of brains, i.e., their physical properties.

Perhaps this question is ill-posed, but I'm only looking for reading materials so that I can hopefully formulate better questions.


Answer

The study of models of computation, extremely broadly construed, is called "generalized recursion theory" (or "higher recursion theory," "generalized/higher computability theory," etc.). We can indeed rigorously define and study (from a theoretical standpoint, of course) models of computation much stronger than what is "physically implementable" in even an ideal sense, according to our current understanding of physics. Some of these include:

  • Metarecursion theory, which develops an analogy between "computably enumerable" and "$\Pi^1_1$" (and extends to descriptive set theory by developing an analogy between "computably enumerable" and "coanalytic," and is further generalizable e.g. by Spector classes and Q-theory); a rough version of the basic dictionary is displayed just after this list.

  • $\alpha$-recursion theory, where we shift from computing in the context of $\mathbb{N}$ to computing in the context of more complicated infinite sets with nice properties (and $\beta$-recursion and admissible recursion theory, which generalize $\alpha$-recursion in different directions).

  • Infinite-time Turing machines, which ... are basically exactly what they sound like.

  • Ordinal register machines, which are like infinite-time Turing machines but even more so; roughly, they're the analogue of Turing machines when we replace $\mathbb{N}$ by the class of all ordinals.

  • Kleene recursion, where we look at computations on "higher type" objects, generally allowing the use of "lower type" objects (and their "equality predicates") as parameters.

  • Recursion over general structures, which is too broad to have a single name; see e.g. Blum-Shub-Smale machines (which are really a special case of previously-existing notions; a toy sketch follows this list) or Moschovakis's book on the subject.
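
To give a feel for the first bullet, here is a rough version of the basic metarecursion dictionary as I understand it from the standard Kreisel-Sacks development (see Sacks' book *Higher Recursion Theory* for the real thing), with the classical notion on the left and its metarecursive counterpart on the right:

$$\begin{array}{ccc}
\mathbb{N} & \longleftrightarrow & \omega_1^{CK}\\
\text{computably enumerable} & \longleftrightarrow & \Pi^1_1\\
\text{computable} & \longleftrightarrow & \Delta^1_1\ \text{(hyperarithmetic)}
\end{array}$$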
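
And to make "recursion over general structures" concrete, here is a toy Python interpreter for a BSS-style register machine. Everything about it is my own simplification for illustration: Python floats stand in for the exact real arithmetic the actual BSS model assumes, and the three-instruction encoding is invented here, not taken from the Blum-Shub-Smale formalism.

```python
# Toy interpreter for a BSS-style machine: registers hold real numbers
# (approximated by Python floats; the genuine model assumes exact real
# arithmetic), "compute" instructions apply a map to the registers, and
# control flow branches on the sign of a register.

def run_bss(program, registers, max_steps=10_000):
    pc = 0
    for _ in range(max_steps):
        instr = program[pc]
        if instr[0] == "compute":          # ("compute", map_on_registers)
            registers = instr[1](registers)
            pc += 1
        elif instr[0] == "branch":         # ("branch", reg_index, target)
            pc = instr[2] if registers[instr[1]] >= 0 else pc + 1
        elif instr[0] == "halt":
            return registers
    raise RuntimeError("step budget exhausted (machine may diverge)")

# Example: absolute value via a sign test -- trivial, but it exercises
# the model's exact branching on real numbers.
abs_program = [
    ("branch", 0, 2),                        # if r0 >= 0, skip negation
    ("compute", lambda r: [-r[0]] + r[1:]),  # r0 := -r0
    ("halt",),
]

print(run_bss(abs_program, [-3.5]))  # [3.5]
```

The model's characteristic results (famously, the undecidability of the Mandelbrot set in it) depend essentially on the exact-arithmetic idealization that the float approximation above discards.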

There are also models of computation stronger than Turing machines which are based on "idealized" versions of phenomena which, while extremely exotic, are (somewhat) consistent with our current understanding of physics, e.g. computation in Malament-Hogarth spacetimes.

Moreover, we can prove abstract results applicable to all models of computation satisfying some very basic properties. For example, the unsolvability of the halting problem generalizes to any situation where we have a notion of "universal machine." (Of course anything we can conceive has to be "permitted" by the laws of physics to which our physical brains are subject, but my point is that these limits are quite loose: we can conceive of blatantly un-physical objects.)
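
Since the question singles out the halting problem, here is the diagonalization behind its unsolvability as a small Python sketch. The setup is purely illustrative: `halts` is a hypothetical total decider (the name is mine), and the point of the sketch is that no correct implementation of it can exist.

```python
# Sketch of the diagonal argument for the unsolvability of the halting
# problem. `halts` is a hypothetical decider; the argument shows that no
# total, correct implementation of it is possible.

def halts(program, argument):
    """Hypothetically: return True iff program(argument) halts."""
    raise NotImplementedError("no such total decider can exist")

def diagonal(program):
    """Do the opposite of what `halts` predicts about program(program)."""
    if halts(program, program):
        while True:      # predicted to halt: loop forever instead
            pass
    return "halted"      # predicted to loop: halt instead

# Contradiction: if diagonal(diagonal) halts, then halts(diagonal, diagonal)
# is True, so diagonal(diagonal) loops forever -- and vice versa. The same
# construction goes through in any model of computation with a universal
# machine, which is why the result generalizes so widely.
```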
