Question

Is there a performance hit if we use a loop instead of recursion, or vice versa, in algorithms where both can serve the same purpose? E.g., checking whether a given string is a palindrome. I have seen many programmers use recursion as a way to show off when a simple iterative algorithm would fit the bill. Does the compiler play a vital role in deciding what to use?


Solution

It is possible that recursion will be more expensive, depending on whether the recursive function is tail recursive (the recursive call is the very last operation, so nothing remains to be done after it returns). A compiler that recognizes tail recursion can optimize it into its iterative counterpart, while keeping the concise, clear implementation in your source code.
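For illustration, here is a minimal sketch (in Java; the method names are mine, not from the question) of the palindrome check written so the recursive call is in tail position, next to the loop a tail-call-optimizing compiler could reduce it to. Note that, as another answer further down points out, javac and the JVM do not actually perform this optimization.

static boolean isPalindrome(String s, int lo, int hi) {
    if (lo >= hi) return true;                       // base case: empty or single-char range
    if (s.charAt(lo) != s.charAt(hi)) return false;  // mismatch: not a palindrome
    return isPalindrome(s, lo + 1, hi - 1);          // tail call: nothing happens after it returns
}

// The iterative counterpart a tail-call-optimizing compiler could produce:
static boolean isPalindromeLoop(String s) {
    for (int lo = 0, hi = s.length() - 1; lo < hi; lo++, hi--) {
        if (s.charAt(lo) != s.charAt(hi)) return false;
    }
    return true;
}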

I would write the algorithm in the way that makes the most sense and is the clearest for the poor sucker (be it yourself or someone else) who has to maintain the code in a few months or years. If you run into performance issues, profile your code, and then and only then look into optimizing by moving over to an iterative implementation. You may want to look into memoization and dynamic programming.
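As a taste of memoization, here is a minimal sketch (Java; the names are illustrative): caching each subresult means the naive exponential Fibonacci recursion does only linear work.

import java.util.HashMap;
import java.util.Map;

static Map<Integer, Long> memo = new HashMap<>();

static long fib(int n) {
    if (n <= 1) return n;                   // base cases: fib(0) = 0, fib(1) = 1
    Long cached = memo.get(n);              // reuse a previously computed answer
    if (cached != null) return cached;
    long result = fib(n - 1) + fib(n - 2);  // compute each subproblem once...
    memo.put(n, result);                    // ...and remember it
    return result;
}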

OTHER TIPS

Loops may achieve a performance gain for your program. Recursion may achieve a performance gain for your programmer. Choose which is more important in your situation!

Comparing recursion to iteration is like comparing a Phillips head screwdriver to a flat head screwdriver. For the most part you could remove any Phillips head screw with a flat head, but it would just be easier if you used the screwdriver designed for that screw, right?

Some algorithms just lend themselves to recursion because of the way they are designed (Fibonacci sequences, traversing a tree-like structure, etc.). Recursion makes the algorithm more succinct and easier to understand (and therefore more shareable and reusable).

Also, some recursive algorithms use "lazy evaluation", which makes them more efficient than their iterative counterparts. This means that they do the expensive calculations only at the time they are needed, rather than on every pass of the loop.
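For a concrete flavor of laziness in Java rather than a functional language, here is a minimal sketch (my own example, requiring Java 10+ for orElseThrow()): a java.util.stream pipeline is evaluated lazily, so even a conceptually infinite source is fine as long as the terminal operation only demands finitely many elements.

import java.util.stream.Stream;

static long firstSquareOver(long limit) {
    return Stream.iterate(1L, n -> n + 1)   // 1, 2, 3, ... (conceptually infinite)
                 .map(n -> n * n)           // squared lazily, only on demand
                 .filter(sq -> sq > limit)  // keep only the large ones
                 .findFirst()               // forces just enough evaluation, then stops
                 .orElseThrow();            // unreachable here, but required for Optional
}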

That should be enough to get you started. I'll dig up some articles and examples for you too.

Link 1: Haskell vs PHP (Recursion vs Iteration)

Here is an example where the programmer had to process a large data set using PHP. He shows how easy it would have been to handle in Haskell using recursion, but since PHP had no easy way to accomplish the same method, he was forced to use iteration to get the result.

http://blog.webspecies.co.uk/2011-05-31/lazy-evaluation-with-php.html

Link 2: Mastering Recursion

Most of recursion's bad reputation comes from its high cost and inefficiency in imperative languages. The author of this article talks about how to optimize recursive algorithms to make them faster and more efficient. He also goes over how to convert a traditional loop into a recursive function, and the benefits of tail recursion. His closing words sum up some of my key points, I think:

"recursive programming gives the programmer a better way of organizing code in a way that is both maintainable and logically consistent."

https://developer.ibm.com/articles/l-recurs/

Link 3: Is recursion ever faster than looping? (Answer)

Here is a link to an answer for a Stack Overflow question that is similar to yours. The author points out that many of the benchmarks associated with either recursing or looping are very language specific. Imperative languages are typically faster with a loop and slower with recursion, and vice versa for functional languages. I guess the main point to take from this link is that it is very difficult to answer the question in a language-agnostic, situation-blind sense.

Is recursion ever faster than looping?

Recursion is more costly in memory, as each recursive call generally requires a return address to be pushed onto the stack, so that the program can later return to that point.

Still, there are many cases in which recursion is a lot more natural and readable than loops - like when working with trees. In these cases I would recommend sticking to recursion.

Typically, one would expect the performance penalty to lie in the other direction. Recursive calls can lead to the construction of extra stack frames; the penalty for this varies. Also, in some languages like Python (more correctly, in some implementations of some languages...), you can run into stack limits rather easily for tasks you might specify recursively, such as finding the maximum value in a tree data structure. In these cases, you really want to stick with loops.

Writing good recursive functions can reduce the performance penalty somewhat, assuming you have a compiler that optimizes tail recursion, etc. (Also, double-check that the function really is tail recursive; it's one of those things that many people get wrong.)

Apart from "edge" cases (high performance computing, very large recursion depth, etc.), it's preferable to adopt the approach that most clearly expresses your intent, is well-designed, and is maintainable. Optimize only after identifying a need.

Recursion is better than iteration for problems that can be broken down into multiple, smaller pieces.

For example, to make a recursive Fibonacci algorithm, you break fib(n) down into fib(n-1) and fib(n-2) and compute both parts. Iteration only lets you repeat a single block of code over and over again.

However, Fibonacci is actually a broken example, and I think iteration is more efficient here. Notice that fib(n) = fib(n-1) + fib(n-2) and fib(n-1) = fib(n-2) + fib(n-3): fib(n-2) gets calculated twice!
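For contrast, here is a minimal iterative sketch (Java; my own illustration) that computes each value exactly once, in O(n) time and O(1) space:

static long fibIter(int n) {
    long prev = 0, curr = 1;       // fib(0), fib(1)
    for (int i = 0; i < n; i++) {  // slide the pair up to fib(n)
        long next = prev + curr;
        prev = curr;
        curr = next;
    }
    return prev;                   // after n steps, prev == fib(n)
}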

A better example is a recursive algorithm for a tree. The problem of analyzing the parent node can be broken down into multiple smaller problems of analyzing each child node. Unlike the Fibonacci example, the smaller problems are independent of each other.

So yeah - recursion is better than iteration for problems that can be broken down into multiple, smaller, independent, similar problems.
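A minimal sketch (Java; the Node shape is hypothetical) of how the recursion mirrors the tree: each child is an independent subproblem of the same form.

class Node {
    int value;
    Node left, right;   // null where a child is missing
}

static int sum(Node node) {
    if (node == null) return 0;   // base case: an empty subtree contributes nothing
    return node.value             // this node...
         + sum(node.left)         // ...plus the independent left subproblem
         + sum(node.right);       // ...plus the independent right subproblem
}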

Your performance deteriorates when using recursion because calling a method, in any language, implies a lot of preparation: the caller pushes a return address and the call parameters, other context information such as processor registers may be saved somewhere, and at return time the called method posts a return value which the caller then retrieves, and any previously saved context information is restored. The performance difference between an iterative and a recursive approach lies in the time these operations take.

From an implementation point of view, you really start noticing the difference when the time it takes to handle the calling context is comparable to the time it takes for your method to execute. If your recursive method takes much longer to execute than the calling-context management, go the recursive way, as the code is generally more readable and easier to understand and you won't notice the performance loss. Otherwise go iterative for efficiency reasons.

I believe tail recursion in Java is not currently optimized. The details are sprinkled throughout this discussion on LtU and the associated links. It may be a feature in the upcoming version 7, but apparently it presents certain difficulties when combined with stack inspection, since certain frames would be missing. Stack inspection has been used to implement Java's fine-grained security model since Java 2.

http://lambda-the-ultimate.org/node/1333

There are many cases where recursion gives a much more elegant solution than the iterative method, the common example being traversal of a binary tree, so it isn't necessarily more difficult to maintain. In general, iterative versions are usually a bit faster (and during optimization may well replace a recursive version), but recursive versions are simpler to comprehend and to implement correctly.

Recursion is very useful in some situations. For example, consider the code for finding the factorial:

int factorial ( int input )
{
  int x, fact = 1;
  for ( x = input; x > 1; x--)
     fact *= x;
  return fact;
}

Now consider the same function written using recursion:

int factorial ( int input )
{
  if (input == 0)
  {
     return 1;
  }
  return input * factorial(input - 1);
}

Comparing these two, we can see that the recursive version is easier to understand. But if recursion is not used with care, it can be quite error-prone too: if we omit the base case if (input == 0), the code will run for a while and then usually end with a stack overflow.

In many cases recursion is faster because of caching (locality of reference), which improves performance. For example, here is an iterative version of merge sort using the traditional merge routine. It runs slower than the recursive implementation because of those cache effects.

Iterative implementation

public static void sort(Comparable[] a)
{
    int N = a.length;
    aux = new Comparable[N];   // class-level auxiliary array used by merge()
    // bottom-up: merge runs of size 1, 2, 4, ... until the whole array is sorted
    for (int sz = 1; sz < N; sz = sz+sz)
        for (int lo = 0; lo < N-sz; lo += sz+sz)
            merge(a, lo, lo+sz-1, Math.min(lo+sz+sz-1, N-1));
}

Recursive implementation

private static void sort(Comparable[] a, Comparable[] aux, int lo, int hi)
{
    if (hi <= lo) return;            // base case: zero or one element
    int mid = lo + (hi - lo) / 2;
    sort(a, aux, lo, mid);           // sort the left half
    sort(a, aux, mid+1, hi);         // sort the right half
    merge(a, aux, lo, mid, hi);      // merge the two sorted halves
}

PS - this is what Professor Kevin Wayne (Princeton University) said in the Algorithms course on Coursera.

Using recursion, you incur the cost of a function call with each "iteration", whereas with a loop, the only thing you usually pay for is an increment/decrement. So, if the code for the loop isn't much more complicated than the code for the recursive solution, the loop will usually be superior to recursion.

Recursion and iteration depend on the business logic you want to implement, though in most cases they can be used interchangeably. Most developers go for recursion because it is easier to understand.

It depends on the language. In Java you should use loops. Functional languages optimize recursion.

If you're just iterating over a list, then sure, iterate away.

A couple of other answers have mentioned (depth-first) tree traversal. It really is such a great example, because it's a very common thing to do to a very common data structure. Recursion is extremely intuitive for this problem.

Check out the "find" methods here: http://penguin.ewu.edu/cscd300/Topic/BSTintro/index.html

Recursion is simpler (and thus more fundamental) than any possible definition of iteration. You can define a Turing-complete system with only a pair of combinators (yes, even recursion itself is a derived notion in such a system). Lambda calculus is an equally powerful fundamental system, featuring recursive functions. But if you want to define iteration properly, you need many more primitives to start with.

As for the code: no, recursive code is in fact much easier to understand and maintain than purely iterative code, since most data structures are recursive. Of course, to get it right you need a language with support for higher-order functions and closures, at least, to get all the standard combinators and iterators in a neat way. In C++, of course, complicated recursive solutions can look a bit ugly, unless you're a hardcore user of FC++ and the like.

I would think that with (non-tail) recursion there would be a performance hit for allocating a new stack frame, etc., every time the function is called (depending on the language, of course).

It depends on the recursion depth, and on how much the function-call overhead contributes to the total execution time.

For example, calculating the classical factorial recursively is very inefficient due to:

- the risk of data overflow
- the risk of stack overflow
- function-call overhead occupying 80% of the execution time

On the other hand, a min-max algorithm for position analysis in the game of chess, analyzing the next N moves, can be implemented naturally as recursion over the "analysis depth" (as I'm doing ^_^).

Recursion? Where do I start? Wikipedia will tell you it's "the process of repeating items in a self-similar way".

Back in the day when I was doing C and C++, recursion was a godsend, with stuff like "tail recursion". You'll also find that many sorting algorithms use recursion. Quick sort example: http://alienryderflex.com/quicksort/
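For a flavor of how such sorts recurse, here is a minimal quicksort sketch (Java, with a simple last-element pivot; the linked article uses its own variant): partition once, then recurse into the two independent halves.

static void quicksort(int[] a, int lo, int hi) {
    if (lo >= hi) return;                      // base case: 0 or 1 elements
    int pivot = a[hi], i = lo;                 // last element as pivot (simple choice)
    for (int j = lo; j < hi; j++) {            // move everything smaller than the pivot left
        if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;     // drop the pivot into its final slot
    quicksort(a, lo, i - 1);                   // recurse on the smaller elements
    quicksort(a, i + 1, hi);                   // recurse on the larger elements
}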

Recursion, like any other technique, is useful for a specific problem. Perhaps you mightn't find a use straight away, or often, but there will be a problem where you'll be glad it's available.

In C++, if the recursive function is templated, the compiler has more of a chance to optimize it, as all the type deduction and function instantiation happens at compile time. Modern compilers can also inline the function if possible. So with optimization flags like -O3 or -O2 in g++, recursion may have the chance to be faster than iteration. In iterative code, the compiler gets less chance to optimize, as the code is already in a more or less optimal state (if written well enough).

In my case, I was trying to implement matrix exponentiation by squaring on Armadillo matrix objects, both recursively and iteratively. The algorithm can be found here: https://en.wikipedia.org/wiki/Exponentiation_by_squaring. My functions were templated, and I calculated 1,000,000 12x12 matrices raised to the power 10. I got the following results:

iterative + optimisation flag -O3 -> 2.79.. sec
recursive + optimisation flag -O3 -> 1.32.. sec

iterative + No-optimisation flag  -> 2.83.. sec
recursive + No-optimisation flag  -> 4.15.. sec

These results were obtained using gcc-4.8 with the C++11 flag (-std=c++11) and Armadillo 6.1 with Intel MKL. The Intel compiler also shows similar results.

Mike is correct. Tail recursion is not optimized out by the Java compiler or the JVM. You will always get a stack overflow with something like this:

int count(int i) {
  return i >= 100000000 ? i : count(i+1);
}

Keep in mind that if you recurse too deeply you will get a stack overflow, depending on the allowed stack size. To prevent this, make sure to provide a base case that ends your recursion.

Recursion has the disadvantage that an algorithm written with it typically has O(n) space complexity, while an iterative approach can often run in O(1) space. That is the advantage of using iteration over recursion. Why, then, do we use recursion?

See below.

Sometimes it is easier to write an algorithm using recursion, while it's slightly tougher to write the same algorithm using iteration. In that case, if you opt for the iterative approach, you have to handle the stack yourself (as sketched below).
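A minimal sketch (Java, reusing the hypothetical Node shape from the tree-sum example earlier on this page) of what handling the stack yourself looks like: the recursive tree sum rewritten as a loop over an explicit stack of pending subtrees.

import java.util.ArrayDeque;
import java.util.Deque;

static int sumIterative(Node root) {
    Deque<Node> stack = new ArrayDeque<>();   // replaces the call stack
    if (root != null) stack.push(root);
    int total = 0;
    while (!stack.isEmpty()) {
        Node node = stack.pop();              // take one pending subtree
        total += node.value;
        if (node.left != null)  stack.push(node.left);   // defer the children
        if (node.right != null) stack.push(node.right);
    }
    return total;
}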

As far as I know, Perl does not optimize tail-recursive calls, but you can fake it.

sub f{
  my($l,$r) = @_;

  if( $l >= $r ){
    return $l;
  } else {

    # return f( $l+1, $r );

    @_ = ( $l+1, $r );
    goto &f;

  }
}

When first called, it will allocate space on the stack. Then it will change its arguments and restart the subroutine, without adding anything more to the stack. It therefore pretends that it never called itself, changing the process into an iterative one.

Note that there is no "my @_;" or "local @_;"; if there were, it would no longer work.

Using just Chrome 45.0.2454.85 m, recursion seems to be noticeably faster.

Here is the code:

(function recursionVsForLoop(global) {
    "use strict";

    // Perf test
    function perfTest() {}

    perfTest.prototype.do = function(ns, fn) {
        console.time(ns);
        fn();
        console.timeEnd(ns);
    };

    // Recursion method
    (function recur() {
        var count = 0;
        global.recurFn = function recurFn(fn, cycles) {
            fn();
            count = count + 1;
            if (count !== cycles) recurFn(fn, cycles);
        };
    })();

    // Looped method
    function loopFn(fn, cycles) {
        for (var i = 0; i < cycles; i++) {
            fn();
        }
    }

    // Tests
    var curTest = new perfTest(),
        testsToRun = 100;

    curTest.do('recursion', function() {
        recurFn(function() {
            console.log('a recur run.');
        }, testsToRun);
    });

    curTest.do('loop', function() {
        loopFn(function() {
            console.log('a loop run.');
        }, testsToRun);
    });

})(window);

RESULTS

// 100 runs using standard for loop

100x for loop run. Time to complete: 7.683ms

// 100 runs using functional recursive approach w/ tail recursion

100x recursion run. Time to complete: 4.841ms

At 300 cycles per test, recursion won again, by a bigger margin. (The original answer showed this in a screenshot.)

If the iterations are atomic and orders of magnitude more expensive than pushing a new stack frame and spawning a new thread, and you have multiple cores and a runtime environment that can use all of them, then a recursive approach can yield a huge performance boost when combined with multithreading. If the average number of iterations is not predictable, then it might be a good idea to use a thread pool, which will control thread allocation and prevent your process from creating too many threads and hogging the system.

For example, in some languages, there are recursive multithreaded merge sort implementations.
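In Java, for instance, one way such an implementation might look is this sketch on top of the fork/join framework (the class name and threshold are my own illustrative choices, not a reference implementation): the two recursive halves are submitted as parallel subtasks, and the merge runs after both complete.

import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class ParallelMergeSort extends RecursiveAction {
    private static final int THRESHOLD = 1 << 13;  // below this, sort sequentially
    private final int[] a, aux;
    private final int lo, hi;

    ParallelMergeSort(int[] a, int[] aux, int lo, int hi) {
        this.a = a; this.aux = aux; this.lo = lo; this.hi = hi;
    }

    @Override
    protected void compute() {
        if (hi - lo <= THRESHOLD) {        // small range: plain sequential sort
            Arrays.sort(a, lo, hi + 1);
            return;
        }
        int mid = lo + (hi - lo) / 2;
        invokeAll(new ParallelMergeSort(a, aux, lo, mid),      // sort the two halves
                  new ParallelMergeSort(a, aux, mid + 1, hi)); // as parallel subtasks
        merge(lo, mid, hi);                // then merge the sorted halves
    }

    private void merge(int lo, int mid, int hi) {
        System.arraycopy(a, lo, aux, lo, hi - lo + 1);
        int i = lo, j = mid + 1;
        for (int k = lo; k <= hi; k++) {
            if      (i > mid)         a[k] = aux[j++];
            else if (j > hi)          a[k] = aux[i++];
            else if (aux[j] < aux[i]) a[k] = aux[j++];
            else                      a[k] = aux[i++];
        }
    }

    public static void sort(int[] a) {
        new ForkJoinPool().invoke(new ParallelMergeSort(a, new int[a.length], 0, a.length - 1));
    }
}

Usage would be something like ParallelMergeSort.sort(data); the threshold keeps task granularity coarse enough that the framework's overhead doesn't swamp the gains.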

But again, multithreading can be used with looping rather than recursion, so how well this combination will work depends on more factors including the OS and its thread allocation mechanism.

I'm going to answer your question by designing a Haskell data structure by "induction", which is a sort of "dual" to recursion. And then I will show how this duality leads to nice things.

We introduce a type for a simple tree:

data Tree a = Branch (Tree a) (Tree a)
            | Leaf a
            deriving (Eq)

We can read this definition as saying "a tree is a Branch (which contains two trees) or a Leaf (which contains a data value)". So the Leaf is a sort of minimal case. If a tree isn't a Leaf, then it must be a compound tree containing two trees. These are the only cases.

Let's make a tree:

example :: Tree Int
example = Branch (Leaf 1) 
                 (Branch (Leaf 2) 
                         (Leaf 3))

Now, let's suppose we want to add 1 to each value in the tree. We can do this by defining:

addOne :: Tree Int -> Tree Int
addOne (Branch a b) = Branch (addOne a) (addOne b)
addOne (Leaf a)     = Leaf (a + 1)

First, notice that this is in fact a recursive definition. It handles the data constructors Branch and Leaf as cases, and since Leaf is the minimal case and these are the only possible cases, we are sure the function will terminate.

What would it take to write addOne in an iterative style? What would looping over an arbitrary number of branches look like?

Also, this kind of recursion can often be factored out, in terms of a "functor". We can make Trees into Functors by defining:

instance Functor Tree where
  fmap f (Leaf a)     = Leaf (f a)
  fmap f (Branch a b) = Branch (fmap f a) (fmap f b)

and defining:

addOne' = fmap (+1)

We can factor out other recursion schemes, such as the catamorphism (or fold) for an algebraic data type. Using a catamorphism (here assuming the base functor TreeF, with constructors LeafF/BranchF, and cata from the recursion-schemes package), we can write:

-- LeafF/BranchF are the base-functor constructors that
-- makeBaseFunctor ''Tree from recursion-schemes would generate.
addOne'' :: Tree Int -> Tree Int
addOne'' = cata go where
  go (LeafF a)     = Leaf (a + 1)
  go (BranchF a b) = Branch a b

Whether a runaway recursion crashes cleanly or corrupts memory depends on how your language manages the stack; either way, make sure your function has a terminating case (or bottoms out in a function call, the standard library, etc.). And without recursion it would simply not be possible to have things like... Google or SQL, or any place where one must efficiently search through large data structures (classes) or databases.

Recursion is the way to go if you want to iterate through files; I'm pretty sure that's how 'find * | grep *' works. Kind of dual recursion, especially with the pipe (but don't make a bunch of syscalls, as so many like to do, if it's anything you're going to put out there for others to use).

Higher-level languages, and even clang/C++, may implement the two the same way under the hood.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow