Question

Wikipedia has this to say:

Total functional programming (also known as strong functional programming, to be contrasted with ordinary, or weak functional programming) is a programming paradigm which restricts the range of programs to those which are provably terminating.

and

These restrictions mean that total functional programming is not Turing-complete. However, the set of algorithms which can be used is still huge. For example, any algorithm which has had an asymptotic upper bound calculated for it can be trivially transformed into a provably-terminating function by using the upper bound as an extra argument which is decremented upon each iteration or recursion.
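If I understand that correctly, the transformation looks roughly like the following sketch in Haskell (Haskell itself is not a total language, and the name iterateWithFuel is made up for illustration): the calculated upper bound becomes an explicit "fuel" argument that strictly decreases on every recursive call, so the recursion can take at most that many steps.

    -- "Fuel" sketch: termination is obvious because the fuel argument
    -- strictly decreases on every recursive call.
    iterateWithFuel :: Eq a => Int -> (a -> a) -> a -> Maybe a
    iterateWithFuel 0    _ _ = Nothing                 -- bound exhausted: give up
    iterateWithFuel fuel f x
      | f x == x  = Just x                             -- reached a fixed point
      | otherwise = iterateWithFuel (fuel - 1) f (f x)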

There is also a Lambda the Ultimate post about a paper on Total Functional Programming.

I hadn't come across that until last week on a mailing list.

Are there any more resources, references or any example implementations that you know of?


Solution

If I understood that correctly, Total Functional Programming means just that: programming with Total Functions. If I remember my math courses correctly, a Total Function is a function which is defined over its entire domain, while a Partial Function is one which has "holes" in its definition.

Now, if you have a function which, for some input value v, goes into infinite recursion or an infinite loop, or in general fails to terminate in some other fashion, then your function isn't defined for v, and is thus partial, i.e. not total.
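As a small illustration, both kinds of partiality are easy to write in Haskell (which happily accepts them); the function names are just made-up examples:

    -- Partial: no equation for the empty list, so `firstElem []` crashes.
    firstElem :: [a] -> a
    firstElem (x:_) = x

    -- Partial: never terminates for any negative input value v.
    countDown :: Int -> Int
    countDown 0 = 0
    countDown n = countDown (n - 1)   -- loops forever when n < 0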

Total Functional Programming doesn't allow you to write such a function: every function must return a result for every possible input, and the type checker ensures that this is the case.
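Here is a hedged sketch of one way the two partial functions above could be made total, still in Haskell syntax (a genuinely total language such as Coq or Agda would have its checker enforce this; Haskell does not):

    import Numeric.Natural (Natural)

    -- Total: the "hole" at [] is made explicit in the result type.
    firstElem :: [a] -> Maybe a
    firstElem []    = Nothing
    firstElem (x:_) = Just x

    -- Total: restricting the input to natural numbers removes the
    -- non-terminating cases, and the recursion is structurally decreasing.
    countDown :: Natural -> Natural
    countDown 0 = 0
    countDown n = countDown (n - 1)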

My guess is that this vastly simplifies error handling: there are no runtime errors left to handle.

The downside is already mentioned in your quote: it's not Turing-complete. An Operating System, for example, is essentially one giant infinite loop. And indeed, we do not want an Operating System to terminate; we call that behaviour a "crash" and yell at our computers about it!

OTHER TIPS

While this is an old question, I think that none of the answers so far mention the real motivation for total functional programming, which is this:

If programs are proofs, and proofs are programs, then programs which have 'holes' don't make any sense as proofs, and introduce logical inconsistency.

Basically, if a proof is a program, then an infinite loop can be used to prove anything. This is really bad, and it provides much of the motivation for why we might want to program totally.

The other answers also tend not to account for the flip side of the paper: while these languages are technically not Turing-complete, you can recover a lot of interesting programs by using co-inductive definitions and functions. We are very prone to thinking in terms of inductive data, but codata serves an important purpose in these languages: it lets you give a total definition of something infinite. A real computation that terminates will only ever use a finite piece of that infinite object; or maybe not, if you're writing an operating system!
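Two tiny sketches of those points in Haskell (which is not total, and that is exactly what makes the first definition possible; the names are illustrative):

    -- 1. Under proofs-as-programs, unrestricted recursion "proves" anything:
    --    `anything` claims to inhabit every type, i.e. to prove every
    --    proposition, which is why a total language must reject it.
    anything :: a
    anything = anything

    -- 2. Codata: an infinite stream is a perfectly well-defined (productive)
    --    object, and a terminating computation only forces a finite prefix.
    data Stream a = Cons a (Stream a)

    ones :: Stream Integer
    ones = Cons 1 ones

    takeS :: Int -> Stream a -> [a]
    takeS n _ | n <= 0  = []
    takeS n (Cons x xs) = x : takeS (n - 1) xs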

It is also worth noting that most proof assistants, Coq for example, are built on this principle.

Charity is another language that guarantees termination:
http://pll.cpsc.ucalgary.ca/charity1/www/home.html

Hume is a language with four levels. The outermost level is Turing-complete, while the innermost level guarantees termination:
http://www-fp.cs.st-andrews.ac.uk/hume/report/
