Question

Currently, I haven't come across a functional language that can achieve the same performance as C/C++. I have also noticed that some languages that favor functional programming over imperative programming, such as Scala and Rust, implement their library functions imperatively for better efficiency.

So here is my question: on today's computers, which execute imperative instructions, is this a limitation of the compiler or of functional programming itself? For every imperative function with no side effects, whether in a language without GC (C/C++/Rust/assembly) or with GC (Java), is there a pure functional counterpart in Haskell, Scala, etc. that can be compiled to run with identical time and space performance (not just asymptotically, but exactly the same), or even to the same instructions, given an optimal functional compiler that applies every modern, or even yet-undiscovered, optimization technique: tail-call elimination, laziness, static analysis, formal verification, and so on?

I am aware of the equivalence between λ-computability and Turing computability, but I couldn't find an answer to this question online. If there is one, please share a compiler example or a proof. If not, please explain why and give a counterexample. Or is this a non-trivial open question?


Supplementary edits as suggested by Andrej Bauer:

To be more specific, here are two examples that led me to think about this question.

For example, tail recursion with an accumulator can make some recursive functions perform identically to an imperative implementation using a for loop. In some cases they may even compile to the same instructions.
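As a minimal sketch of this idea (my own illustration, not from the original post): summing 1..n with an explicit accumulator puts every recursive call in tail position, and with optimization GHC compiles the helper into a register-based loop much like a C for loop.

```haskell
-- Sum 1..n with an accumulator; every call to 'go' is a tail call.
-- The `seq` forces the accumulator at each step so no thunks build up.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go acc i
      | i > n     = acc
      | otherwise = let acc' = acc + i
                    in acc' `seq` go acc' (i + 1)
```

In GHCi, `sumTo 100` evaluates to 5050; compiled with `-O`, the recursion becomes a loop rather than a stack of calls.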

Another example concerns laziness in Haskell. Laziness can avoid unnecessary copying of data structures and so improve performance. However, laziness also leaves thunks, closures, etc. in the runtime, so the program still isn't as fast as an imperative implementation that carries no such overhead. I am wondering whether such things can be removed statically, during compilation, before the program runs.
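To make the thunk overhead concrete (my own illustration, not from the post): the two functions below compute the same sum, but the lazy `foldl` builds a chain of `(+)` thunks on the heap, while `foldl'` forces the accumulator at each step. With optimization, GHC's strictness analysis compiles the strict version into a tight loop with the accumulator in a register, which is exactly the kind of static removal of laziness the question asks about.

```haskell
import Data.List (foldl')

-- Same result, different runtime behavior:
-- lazySum accumulates unevaluated (+) thunks; strictSum does not.
lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0
strictSum = foldl' (+) 0
```

Both return 5050 on `[1..100]`; the difference only shows up as memory use (and possible stack overflow for the lazy version) on large inputs.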

Supplementary edits as suggested by Nearoo:

The question can also be stated this way: is there a language that supports both pure functional and imperative programming, paired with an optimizing compiler, in which every side-effect-free function implemented imperatively can be replaced by a purely functional implementation at no cost in performance, or even compiled to the same instructions?


Solution

  1. Performance is not a property of a language; it is a property of specific programs within a language. A language might be very fast at some things and slow at others.

    For example, Chez Scheme can sometimes achieve performance comparable to C, not because the language is more efficient, but because defensive practices that programmers routinely need in C are less necessary in Scheme.

    Likewise, there are times when Haskell can do very efficient parallelism or concurrency, not because it is faster than C, but because the immutability guarantees of the language allow the programmer to avoid things like locks and other synchronization techniques.

    And performance varies from implementation to implementation: I can hand-roll a C interpreter, but it will certainly not be fast. C is not fast; GCC and Clang are.

  2. What counts as a "functional" language? Every practical functional language has some facilities for mutable state: OCaml, Haskell, Scala, Lisp, Scheme, etc. Tail recursion gives something roughly equivalent to mutation inside a for loop, but when this is not enough, functional languages give the programmer access to mutable state. In the case of Haskell, this is controlled by the type system, so there is never implicit mutable state, but mutation is very much allowed and even encouraged. Look at any Haskell code and you will see the IO monad everywhere. Likewise, ML languages distinguish between the types T and ref T, so you can tell from the types whether a variable is mutable or not.
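    A small sketch of this controlled mutation (my own illustration, not from the answer): Haskell's ST monad lets you write an imperative-style loop over a mutable cell, while the type of `runST` guarantees the mutation cannot leak, so the function is pure from the outside, much like the T vs. ref T distinction in ML.

    ```haskell
    import Control.Monad (forM_)
    import Control.Monad.ST (runST)
    import Data.STRef (newSTRef, readSTRef, modifySTRef')

    -- An imperative-style sum using a mutable STRef, sealed inside runST.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      ref <- newSTRef 0                        -- allocate a mutable cell
      forM_ xs $ \x ->
        modifySTRef' ref (+ x)                 -- update it in a loop
      readSTRef ref                            -- read out the final value
    ```

    Despite the internal mutation, `sumST` has an ordinary pure type and can be used anywhere a pure function can.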

  3. There is no "optimal" compiler, thanks to Rice's theorem. All compilers, imperative or functional, are "best effort": they use conservative approximations to optimize code.

    If we had an optimal compiler, every program would run using the most efficient instructions possible, and it wouldn't matter what language you coded your problem in: the optimal sequence of instructions computing a problem does not depend on the source language. But such a compiler would compile every non-halting (side-effect-free) program down to a bare infinite loop like while(true) {}, which means we could decide whether a program halts, which is impossible.

  4. As for Turing machines and the lambda calculus, I think it is fairly straightforward to implement a lambda calculus interpreter on a Turing machine that is asymptotically equivalent to a universal Turing machine. But Big-O complexity isn't what people mean when they say "functional languages are slow". Usually they mean constant-factor overhead, which is very different, and which we don't have equally good mathematical tools to model; so in practice we measure precise performance experimentally.

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange