Question

In school I have been learning about runtime and about writing more efficient algorithms using tail recursion and the like, and a little while back an assignment asked us to consider this function for calculating powers:

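;; straightforward recursion: one multiplication per step, so n multiplications and O(n) stack depth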
(define (exp x n)
    (if (zero? n) 1 (* x (exp x (- n 1)))))

and we were tasked with writing an exp function with a runtime of O(log n), so this was my answer:

(define (exp x n)
    (cond
        ((zero? n) 1)
        ((= 1 n) x)
        ((even? n) (exp (* x x) (/ n 2)))
        (else (* x (exp (* x x) (/ (- n 1) 2))))))

which simply comes from x^(2n) = (x^2)^n and x^(2n+1) = x * (x^2)^n.
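
For instance, tracing (exp 2 10) with this definition shows the exponent being halved at every step, so only about log2(n) recursive calls are made instead of n:

(exp 2 10)
= (exp 4 5)            ; 2^10 = (2^2)^5
= (* 4 (exp 16 2))     ; 4^5  = 4 * (4^2)^2
= (* 4 (exp 256 1))    ; 16^2 = (16^2)^1
= (* 4 256)
= 1024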

So I have been trying to think of a way to implement tail recursion to optimize this function even further, but I can't really think of a way to do it. Back to my question: is there any sort of rule of thumb for knowing when a polynomial-runtime algorithm can be rewritten to run in logarithmic time?

I ask because, as easy as it was to rewrite this so that its runtime is logarithmic, I never would have thought to do it without having been specifically asked to.


Solution

Regarding the first part of your question: it's relatively simple to turn the procedure into tail-recursive form; we just have to pass an additional parameter to accumulate the answer. To avoid defining a separate helper procedure, I'll use a named let:

(define (exp x n)
  (let loop ([x x] [n n] [acc 1])
    (cond
      ((zero? n) acc)
      ((= n 1) (* x acc))
      ((even? n) (loop (* x x) (/ n 2) acc))
      (else (loop (* x x) (/ (- n 1) 2) (* x acc))))))
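
For comparison, here is how (exp 2 10) unfolds with this version: each step simply re-invokes loop with updated arguments, so no multiplications are left pending on the stack:

(exp 2 10)
-> (loop 2   10 1)
-> (loop 4   5  1)     ; 10 even: square the base, halve n
-> (loop 16  2  4)     ; 5 odd:   square the base, fold x into acc
-> (loop 256 1  4)     ; 2 even:  square the base, halve n
-> (* 256 4)           ; n = 1:   multiply the final base into acc
=  1024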

And for the second question: the rule of thumb would be that if there is a way to halve the size of the problem when making the recursive call (in such a way that the full result can still be computed from the half-sized subproblem), then that's a good sign that a logarithmic solution might exist for it. Of course, that's not always so obvious.
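
For example, the same halving pattern also gives O(log n) multiplication by repeated doubling, using only addition. Here is a minimal sketch of that idea (the mul name is just for illustration): swapping * for + and the identity 1 for 0 leaves the shape of the recursion unchanged.

(define (mul x n)
  (let loop ([x x] [n n] [acc 0])
    (cond
      ((zero? n) acc)
      ((= n 1) (+ x acc))
      ((even? n) (loop (+ x x) (/ n 2) acc))            ; x*2k     = (2x)*k
      (else (loop (+ x x) (/ (- n 1) 2) (+ x acc))))))  ; x*(2k+1) = x + (2x)*k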

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow