Question

A polynomial-time Turing machine algorithm is considered efficient if its run-time, in the worst-case, is bounded by a polynomial function in the input size. I'm aware of the strong Church-Turing thesis:

Any reasonable model of computation can be efficiently simulated on Turing machines

However, I'm not aware of a solid theory for analyzing the computational complexity of algorithms in the $\lambda$-calculus.

Do we have a notion of computational efficiency for every known model of computation? Are there any models that are only useful for computability questions but useless for computational complexity questions?

Solution

As far as I know, the main models of computability are the λ-calculus, Turing machines, and recursive functions. I don't know the situation regarding complexity for recursive functions; they may or may not be useless for it.

It may be seen as a fortunate coincidence that Turing machines, which are arguably rather inefficient machines, are also a very good model of complexity. What makes things work naturally is that many transformations involving TMs are polynomial (the universal machine, simulating an $n$-tape machine with a 1-tape machine, going from an arbitrary alphabet to a binary one, simulating a PRAM, ...), and that polynomials form a class of functions closed under arithmetic operations and composition – which makes them a good candidate for complexity theory.
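For instance, closure under composition is what keeps these simulations stackable: if one simulation has overhead $p(n)$ and the next has overhead $q(n)$, the combined overhead is about $p(q(n))$, which is still a polynomial, e.g. $$p(n) = n^2, \quad q(n) = n^3 \quad\Longrightarrow\quad p(q(n)) = (n^3)^2 = n^6.$$ So "polynomial time" survives any chain of such translations between machine models.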

The pure λ-calculus was, in itself, useless for complexity. However, a simple type system came into play and gave termination guarantees for some λ-terms in a very easy way. Then other systems (System T, System F, ...) allowed great expressiveness while keeping termination.
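To make the termination guarantee concrete, here is a minimal sketch (my own illustration, not part of the original answer) of the simply typed λ-calculus with de Bruijn indices; well-typed closed terms of this system are strongly normalizing, which is exactly the guarantee mentioned above. The names (`Ty`, `Term`, `typeOf`) are invented for the sketch.

```haskell
-- A minimal sketch of the simply typed lambda-calculus.
-- Well-typed closed terms are strongly normalizing.

data Ty = Base | Arr Ty Ty            -- a base type and function types
  deriving (Eq, Show)

data Term = Var Int                    -- de Bruijn index into the context
          | Lam Ty Term                -- \x : Ty . body
          | App Term Term
  deriving Show

-- Type checking against a context (innermost binder first).
typeOf :: [Ty] -> Term -> Maybe Ty
typeOf ctx (Var i)
  | i < length ctx = Just (ctx !! i)
  | otherwise      = Nothing
typeOf ctx (Lam a body) = Arr a <$> typeOf (a : ctx) body
typeOf ctx (App f x) = do
  Arr a b <- typeOf ctx f
  a'      <- typeOf ctx x
  if a == a' then Just b else Nothing

-- Self-application \x. x x is untypable, so looping terms like
-- Omega = (\x. x x)(\x. x x) are ruled out.
main :: IO ()
main = do
  let idT   = Lam Base (Var 0)                    -- \x:Base. x
      selfA = Lam Base (App (Var 0) (Var 0))      -- \x:Base. x x  (ill-typed)
  print (typeOf [] idT)    -- Just (Arr Base Base)
  print (typeOf [] selfA)  -- Nothing
```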

Since efficiency and complexity are refinements of termination, and since types are closely related to logic, light linear logics later came along, which characterize several complexity classes (elementary, P, and some variations for PSPACE and others). Research in this domain is very active; it is not restricted to these complexity classes, and is not even restricted to the λ-calculus.

tl;dr: λ-calculus was useful for computability, termination and complexity theory.

However, to give credit where credit is due, Turing machines are a good and widely agreed-upon way to define what complexity is, but that is true only for loose bounds like "polynomial", not for tight bounds, for which PRAM-like models are more suitable.

OTHER TIPS

In terms of time complexity, I think this is hard to do for the lambda calculus. The reason is that the unit computational step in lambda calculus is $\beta$-reduction (wikipedia entry): $$(\lambda x.{\rm term})\,v \to {\rm term}[x := v]$$ Under this model, every $\beta$-reduction step counts as $1$ unit of time, regardless of the size of the expressions involved.
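To see why that is problematic, here is a small Haskell sketch (my own illustration, not part of the original answer) in which a single β-step at the root duplicates an arbitrarily large argument, so the "work" hidden in one unit-cost step is unbounded. All names are made up for the example.

```haskell
-- One beta-step can duplicate an arbitrarily large argument,
-- so "one step = one time unit" hides an unbounded amount of work.

data Term = Var String
          | Lam String Term
          | App Term Term
  deriving Show

-- Naive substitution; fine here because the example uses distinct
-- variable names, so no capture can occur.
subst :: String -> Term -> Term -> Term
subst x v (Var y)   = if x == y then v else Var y
subst x v (Lam y b) = if x == y then Lam y b else Lam y (subst x v b)
subst x v (App f a) = App (subst x v f) (subst x v a)

-- One beta-step at the root: (\x. body) arg  -->  body[x := arg]
betaStep :: Term -> Maybe Term
betaStep (App (Lam x body) arg) = Just (subst x arg body)
betaStep _                      = Nothing

size :: Term -> Int
size (Var _)   = 1
size (Lam _ b) = 1 + size b
size (App f a) = 1 + size f + size a

main :: IO ()
main = do
  -- (\x. x x) applied to a "big" argument: the argument is copied twice.
  let dup = Lam "x" (App (Var "x") (Var "x"))
      big = foldr (\i t -> App t (Var ("y" ++ show i))) (Var "y0") [1..100 :: Int]
      t   = App dup big
  print (size t)                       -- term size before the step
  print (fmap size (betaStep t))       -- roughly double after one step
```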

About including the λ-calculus in the standard complexity model, here is the abstract of some (very) recent research on the subject. It gives an answer to this question for a restricted form of β-reduction: basically, complexity in the standard cost model is similar to counting β-reduction steps when reduction is restricted to head reduction (which includes call-by-name and call-by-value strategies).

On the Invariance of the Unitary Cost Model for Head Reduction by Beniamino Accattoli and Ugo Dal Lago. (WST2012, link to the proceedings)

The λ-calculus is a widely accepted computational model of higher-order functional programs, yet there is not any direct and universally accepted cost model for it. As a consequence, the computational difficulty of reducing λ-terms to their normal form is typically studied by reasoning on concrete implementation algorithms. Here, we show that when head reduction is the underlying dynamics, the unitary cost model is indeed invariant. This improves on known results, which only deal with weak (call-by-value or call-by-name) reduction. Invariance is proved by way of a linear calculus of explicit substitutions, which allows to nicely decompose any head reduction step in the λ-calculus into more elementary substitution steps, thus making the combinatorics of head-reduction easier to reason about. The technique is also a promising tool to attack what we see as the main open problem, namely understanding for which normalizing strategies the unitary cost model is invariant, if any.
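To make "head reduction" concrete, here is a small Haskell sketch (my own, not from the paper): it repeatedly contracts the head redex and counts the steps, which is the quantity the cited result relates, up to polynomial overhead, to the standard cost model.

```haskell
-- Head reduction: always contract the redex in head position,
-- and count how many steps are taken.

data Term = Var String | Lam String Term | App Term Term
  deriving Show

-- Naive substitution; the example below uses distinct names, so no capture.
subst :: String -> Term -> Term -> Term
subst x v (Var y)   = if x == y then v else Var y
subst x v (Lam y b) = if x == y then Lam y b else Lam y (subst x v b)
subst x v (App f a) = App (subst x v f) (subst x v a)

-- One head-reduction step, if the term is not already in head normal form.
headStep :: Term -> Maybe Term
headStep (Lam x b)         = Lam x <$> headStep b      -- go under the binder
headStep (App (Lam x b) a) = Just (subst x a b)        -- contract head redex
headStep (App f a)         = (`App` a) <$> headStep f  -- redex is in the head
headStep (Var _)           = Nothing

-- Iterate head steps, counting them (with a fuel bound, since
-- untyped terms need not terminate).
headReduce :: Int -> Term -> (Int, Term)
headReduce fuel t
  | fuel <= 0 = (0, t)
  | otherwise = case headStep t of
      Nothing -> (0, t)
      Just t' -> let (n, r) = headReduce (fuel - 1) t' in (n + 1, r)

main :: IO ()
main = do
  let idX = Lam "x" (Var "x")
      t   = App idX (App (Lam "y" (Var "y")) (Lam "z" (Var "z")))
  print (headReduce 100 t)   -- 2 head steps to reach \z. z
```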

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange