Question

I've asked a similar question on cstheory.SE.

According to this answer on Stack Overflow, there is an algorithm that has $\Omega(n \log n)$ complexity in a non-lazy pure functional programming language, while the same algorithm is $\Omega(n)$ in imperative programming. Adding laziness to the FP language would make the algorithm $\Omega(n)$.

Is there any equivalent relationship comparing an FP language with and without higher-order functions? Is it still Turing complete? If it is, does the lack of higher-order functions make the language less "powerful" or efficient?

Solution

In a functional programming language that is expressive enough (for example, with data types sufficient to implement closures), you can eliminate all uses of higher-order functions by the defunctionalization transformation. Since this method is actually used to compile this kind of language, you can reasonably assume that it does not affect performance, and that in this setting the lack of higher order does not make the language any less powerful. It does, however, affect how you write code.
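As an illustration, here is a minimal sketch of the transformation in Haskell (the names `Fun`, `apply`, and `mapFun` are my own, purely illustrative): each lambda in the source program becomes a constructor of a first-order data type, capturing its free variables as fields, and a single first-order `apply` function interprets them.

```haskell
-- Original higher-order program (for reference):
--   map (\x -> x + 1) xs   and   map (\x -> x * k) xs

-- Step 1: one constructor per lambda; free variables become fields.
data Fun = Add1 | MulBy Int

-- Step 2: a first-order 'apply' interpreting each constructor.
apply :: Fun -> Int -> Int
apply Add1      x = x + 1
apply (MulBy k) x = x * k

-- Step 3: the higher-order 'map' becomes a first-order map that
-- takes a Fun value instead of a function.
mapFun :: Fun -> [Int] -> [Int]
mapFun _ []       = []
mapFun f (x : xs) = apply f x : mapFun f xs

main :: IO ()
main = do
  print (mapFun Add1 [1, 2, 3])      -- [2,3,4]
  print (mapFun (MulBy 3) [1, 2, 3]) -- [3,6,9]
```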

However, if the language is not expressive enough, then yes, higher order does provide expressive power. Consider the lambda calculus: without higher-order functions it can do essentially nothing, because even the most basic data types (integers, booleans) are implemented as functions.
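For instance, in the standard Church encodings the booleans themselves are functions of two arguments, so even a conditional must take a function as its argument:

$$ \begin{array}{lcl} \textit{true} & = & \lambda x.\, \lambda y.\, x \\ \textit{false} & = & \lambda x.\, \lambda y.\, y \\ \textit{if}\ b\ t\ e & = & b\ t\ e \end{array} $$

Here $\textit{if}$ applies its argument $b$, which is itself a function, so forbidding higher order leaves nothing to compute with.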

In conclusion, it really depends on the language.


Above is my answer. Below is a comment about a usual assumption on imperative languages.

about an algorithm that has $\Omega(n \log n)$ complexity in a non-lazy functional programming language, while the same algorithm is $\Omega(n)$ in imperative programming. Adding laziness to the FP language would make the algorithm $\Omega(n)$.

I would like to see this reference. The usual assumption is that access to an array of length $n$ in a RAM takes $O(1)$ time, and that the equivalent in pure FP takes $O(\log n)$ time. That is not entirely true: the access time in a RAM is $O(\log m)$, where $m$ is the size of the memory. Of course, $m \geq n$. In practice, accessing an element of an array is much faster. A reason would be that $m$ is bounded, but... so is $n$!
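To make the $O(\log n)$ claim concrete, here is a minimal sketch (my own illustration, with invented names `build` and `index`) of a purely functional "array" as a binary trie; each lookup inspects one bit of the index per level, so it takes logarithmically many steps.

```haskell
-- A persistent "array" of 2^d elements as a complete binary trie,
-- indexed by the bits of the position (least significant bit first).
data Tree a = Leaf a | Node (Tree a) (Tree a)

-- Build a trie of depth d holding f 0 .. f (2^d - 1).
build :: Int -> (Int -> a) -> Tree a
build 0 f = Leaf (f 0)
build d f = Node (build (d - 1) (\i -> f (2 * i)))
                 (build (d - 1) (\i -> f (2 * i + 1)))

-- One bit per level: O(log n) steps per lookup.
index :: Tree a -> Int -> a
index (Leaf x)   _ = x
index (Node l r) i
  | even i    = index l (i `div` 2)
  | otherwise = index r (i `div` 2)

main :: IO ()
main = print (index (build 3 id) 5)  -- 5
```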

EDIT: thank you for the link (the link to the paper about laziness is not available; here is another one). As posted in the comments and in my answer above, the RAM model is a little unfair to pure FP in that it provides $O(1)$-time look-ups even when the size of one address is not bounded. I have yet to understand how the lazy trick works, but I think it is important to note that it only applies to this particular problem.

OTHER TIPS

It depends on what you mean by expressiveness.

Here is an argument that higher order does add something: in a first-order language, primitive recursion is not enough to express the Ackermann function. However, in the presence of higher-order functions, primitive recursion suffices:

$$ \begin{array}{lcl} \textit{Ackermann} \ 0 & = & \lambda x. x+1 \\ \textit{Ackermann} \ ( m+1 ) & = & \textit{Iter}\ ( \textit{Ackermann}\ m ) \\ \textit{Iter}\ f\ 0 & = & f\ 1 \\ \textit{Iter}\ f\ ( n+1 ) & = & f\ ( \textit{Iter}\ f\ n ) \end{array} $$

This defines the Ackermann function using primitive recursion only.
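A direct transcription into Haskell (a sketch; the lowercase names are mine) makes the structure visible: each equation recurses only on a structurally smaller first argument, which is exactly the shape of primitive recursion, and the higher-order step is `iter`, whose first argument is itself a function.

```haskell
ackermann :: Integer -> Integer -> Integer
ackermann 0 = \x -> x + 1               -- Ackermann 0     = λx. x+1
ackermann m = iter (ackermann (m - 1))  -- Ackermann (m+1) = Iter (Ackermann m)

iter :: (Integer -> Integer) -> Integer -> Integer
iter f 0 = f 1                          -- Iter f 0     = f 1
iter f n = f (iter f (n - 1))           -- Iter f (n+1) = f (Iter f n)

main :: IO ()
main = print (ackermann 3 3)            -- 61
```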

Note that $ \textit{Iter}$ is not definable in conventional recursion theory, because $\textit{Iter}$ is higher order. In conventional recursion theory all definable functions have type $\mathbb{N}^k \rightarrow \mathbb{N}$ for some $k$, while $\textit{Iter}$ has type $(\mathbb{N} \rightarrow \mathbb{N}) \rightarrow (\mathbb{N} \rightarrow \mathbb{N})$.

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange