Question

I saw three different ways to write a recursive Fibonacci function: one with the math inline, one with the math inline plus result caching (memoization), and one using tail recursion. I understand how memoization helps: it converts an O(N) algorithm into O(1) lookups once the answers are cached. But I fail to understand how tail call optimization can help this much. I was under the impression it might prevent a few copies or something like that, yet it is nearly as fast as the O(1) version. What is Ruby doing that makes this so fast?

Here is the slow, naive implementation with the math inline. It is clearly the slowest, running in O(N) time, and when looped as in the display, O(N^2) time.

require 'benchmark'

puts Benchmark.measure {
  # Calculate the nth Fibonacci number, f(n).
  def fibo (n)
    if n <= 1
      return n
    else
      value = fibo(n-1) + fibo(n-2)
      return value
    end
  end

  # Display the Fibonacci sequence.
  (1..40).each do |number|
    puts "fibo(#{number}) = #{fibo(number)}"
  end
}

Times Ruby 1.9.3: 55.989000 0.000000 55.989000 ( 55.990000)
Times JRuby 1.7.9: 51.629000 0.000000 51.629000 ( 51.629000)
Source: http://rayhightower.com/blog/2014/04/12/recursion-and-memoization/?utm_source=rubyweekly

Here is the version that memoizes answers; it is clear to me why this one is fast. Once it has done the math, any following request runs in O(1) time, so even inside a loop it still runs in O(N) time in the worst case:

require 'benchmark'

puts Benchmark.measure {
  # Fibonacci numbers WITH memoization.

  # Initialize the memoization array.
  @scratchpad = []
  @max_fibo_size = 50
  (1..@max_fibo_size).each do |i|
    @scratchpad[i] = :notcalculated
  end

  # Calculate the nth Fibonacci number, f(n).
  def fibo (n)
    if n > @max_fibo_size
      return "n must be #{@max_fibo_size} or less."
    elsif n <= 1
      return n
    elsif @scratchpad[n] != :notcalculated
      return @scratchpad[n]
    else
      @scratchpad[n] = fibo(n-1) + fibo(n-2)
      return @scratchpad[n]
    end
  end

  # Display the Fibonacci sequence.
  (1..40).each { |number|
    puts "fibo(#{number}) = #{fibo(number)}"
  }
}

Times Ruby 1.9.3: 0.000000 0.000000 0.000000 ( 0.025000)
Times JRuby 1.7.9: 0.027000 0.000000 0.027000 ( 0.028000)
Source: http://rayhightower.com/blog/2014/04/12/recursion-and-memoization/?utm_source=rubyweekly

The tail-recursive version of this runs pretty much instantly:

require 'benchmark'

puts Benchmark.measure {
  # Tail-recursive helper: acc1 and acc2 carry consecutive Fibonacci numbers,
  # so each call makes exactly one recursive call.
  def fibo_tr(n, acc1, acc2)
    if n == 0
      0
    elsif n < 2
      acc2
    else
      fibo_tr(n - 1, acc2, acc2 + acc1)
    end
  end

  # Calculate the nth Fibonacci number, f(n).
  def fibo (n)
    fibo_tr(n, 0, 1)
  end

  # Display the Fibonacci sequence.
  (1..50).each do |number|
    puts "fibo(#{number}) = #{fibo(number)}"
  end
}

Times Ruby 1.9.3: 0.000000 0.000000 0.000000 ( 0.021000)
Times JRuby 1.7.9: 0.041000 0.000000 0.041000 ( 0.041000)
Source: https://gist.github.com/mvidaurre/11006570


Solution

Tail recursion is not the difference here. In fact, Ruby does not do anything to optimize tail calls.

The difference is that the naive algorithm recursively calls itself twice each time it is called, giving O(2^n) performance, which means the runtime goes up exponentially as n increases. The tail-recursive version runs in linear time.
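
To make that difference concrete, here is a small sketch (not part of the original answer) that just counts calls: naive_calls mirrors the doubly recursive shape of the naive fibo, and tail_calls mirrors the single recursive call of fibo_tr. Both helpers are hypothetical, added only for illustration.

# Count the work done by each shape of recursion.
def naive_calls(n)
  return 1 if n <= 1
  1 + naive_calls(n - 1) + naive_calls(n - 2)      # two recursive calls per level
end

def tail_calls(n, acc1 = 0, acc2 = 1, count = 1)
  return count if n < 2
  tail_calls(n - 1, acc2, acc1 + acc2, count + 1)  # one recursive call per level
end

(10..30).step(10) do |n|
  puts "n=#{n}: naive makes #{naive_calls(n)} calls, tail-recursive makes #{tail_calls(n)}"
end

The naive call count grows exponentially with n, while the tail-recursive count grows by exactly one per step, which is the whole difference in the timings above.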

OTHER TIPS

TL;DR: As Chuck already mentioned, Ruby doesn't have TCO. However, doing one recursive call instead of two has a big impact on how much stack you use and how many iterations are done. With this answer I just want to point out that at some point the memoization version becomes better than the iterative version. NB: I'm not a Ruby programmer, so this may not be idiomatic code.

Your test shows that the iterative (tail-recursive) approach is so fast it can generate fib(1) through fib(50) from scratch just as quickly as your memoization version, which reuses earlier results for every call above 3.

I think 1..50 finishes so fast that it is not a very reliable way to see whether iteration actually is faster. I changed the memoization version to:

# Initialize the memoization array.
@scratchpad = []

# Calculate the nth Fibonacci number, f(n).
def fibo (n)
  if n <= 1 
    return n
  end
  if @scratchpad[n].nil?
    @scratchpad[n] = fibo(n-1) + fibo(n-2)
  end
  return @scratchpad[n]
end

I then changed the loop to this:

(1..5000).each { |number|
  fibo(number) # no need to time character output
}

Here are the results on my computer:

Iteration:   6.260000   0.010000   6.270000 (  6.273362)
Memoization: 0.000000   0.000000   0.000000 (  0.006943)

I used:

ruby -v
ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux]

Increasing the range to 1..50000 for the memoization version still returned a lot faster than the iteration version. The reason is that the iteration starts from scratch each time, while the memoization version has a less efficient algorithm but, thanks to the cache, recurses at most twice for each number, since fib(n-1) and fib(n-2) are already in the array when calculating fib(n).

The slowest version is O(fib(n)), of course. The iterative one is O(n). With memoization, fib(n-2) is free once fib(n-1) has been calculated, so a single call is back to O(n); but in your test you calculate the previous Fibonacci number before the next, so in practice each individual call in 1..x is O(1). If you started with the highest number instead, the first call would be O(n) and every subsequent one O(1).
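
As a rough illustration of that last point (not from the original answer), the following standalone sketch counts how many times a memoized fibo actually recurses per call; calls_for is a hypothetical helper added only for counting, and @scratchpad starts empty before each run.

# Memoized fibo with a call counter, a standalone variant of the code above.
@scratchpad = []
@calls = 0

def fibo(n)
  @calls += 1
  return n if n <= 1
  @scratchpad[n] ||= fibo(n - 1) + fibo(n - 2)
end

def calls_for(n)
  @calls = 0
  fibo(n)
  @calls
end

@scratchpad = []
puts "ascending 1..10: #{(1..10).map { |i| calls_for(i) }.inspect}"   # every call stays cheap

@scratchpad = []
puts "cold fibo(10): #{calls_for(10)} calls, then fibo(9): #{calls_for(9)} call"

Calling in ascending order keeps every individual call down to a handful of recursions, while a cold call to the highest number pays the O(n) cost once and turns every later call into a cache hit.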

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow