Question

I am studying for an exam and I came across some problems I need to address, dealing with base cases:

I am converting from code to a recurrence relation, not the other way around.

Example 1:

 if(n==1) return 0;

Now the recurrence relation for that piece of code is: T(1) = 0

How did I get that? Looking at n==1, we see this is a comparison with a value > 0, which is doing some form of work, so we set "T(1)"; the return 0; isn't doing any work, so we say "= 0"

 => T(1) = 0;

Example 2:

 if(n==0) return n+1*2;

Analyzing: n==0 means we aren't doing any work, so T(0), but return n+1*2; is doing work, so "= 1"

 => T(0) = 1;

What I want to know is whether this is the correct way of analyzing a piece of code like that to come up with a recurrence relation base case.

I am unsure about these, which I came up with on my own to exhaust the possibilities for base cases:

Example 3: if(n==m-2) return n-1; //answer: T(1) = 1; ?
Example 4: if(n!=2) return n;     //answer: T(1) = 1; ?
Example 5: if(n/2==0) return 1;   //answer: T(1) = 1; ?
Example 6: if(n<2) return;        //answer: T(1) = 0; ?

Solution

It's hard to analyze base cases outside of the context of the code, so it might be helpful if you posted the entire function. However, I think your confusion arises from the assumption that T(n) always represents "work". I'm guessing that you are taking a class on complexity and have learned about recurrence relations as a method for expressing the complexity of a recursive function.

T(n) is just a function: you plug in a number n (usually a positive integer) and you get out a number T(n). Just like any other function, T(n) means nothing on its own. However, we often use a function with the notation T(n) to express the amount of time required by an algorithm to run on an input of size n. These are two separate concepts: (1) a function T(n) and the various ways to represent it, such as a recurrence relation, and (2) the number of operations required to run an algorithm.

Let me give an example.

int factorial(int n)
{
    if (n > 0)
        return n * factorial(n - 1);
    else
        return 1;
}

Let's see if we can write some function F(n) that represents the output of the code. Well, F(n) = n*F(n-1), with F(0) = 1. Why? Clearly from the code, the result for n = 0 is 1, and for any other value of n, the result is F(n) = n*F(n-1). That recurrence relation is a very convenient way to express the output of a recursive function. Of course, I could just as easily say that F(n) = n! (the factorial operator), which is also correct; that's a non-recurrence expression of the same function. Notice that I haven't said anything about the run time of the algorithm or how much "work" it is doing. I'm just writing a mathematical expression for what the code outputs.
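For instance, unrolling the recurrence for n = 3 shows it agrees with the factorial:

F(3) = 3*F(2) = 3*2*F(1) = 3*2*1*F(0) = 3*2*1*1 = 6 = 3!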

Dealing with the run time of a function is a little trickier, since you have to decide what you mean by "work" or "an operation." Let's suppose that we don't count "return" as an operation, but we do count multiplication as an operation and we count a conditional (the if statement) as an operation. Under these assumptions, we can try to write a recurrence relation for a function T(n) that describes how much work is done when the input n is given to the function. (Later, I'll make a comment about why this is a poor question.) For n = 0, we have a conditional (the if statement) and a return, nothing else. So T(0) = 1. For any other n > 0, we have a conditional, a multiply, and however many operations are required to compute factorial(n-1), which is T(n-1). So the total for n is:

T(n) = 1 (conditional) + 1 (multiply) + T(n-1) = 2 + T(n-1),
T(0) = 1.

We could write T(n) as a recurrence relation: T(n) = 2 + T(n-1), T(0) = 1. Of course, this is also just the function T(n) = 1 + 2n. Again, I want to stress that these are two very different functions. F(n) is describing the output of the function when n is the input. T(n) is describing how much work is done when n is the input.
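To see where that closed form comes from, unroll the recurrence:

T(n) = 2 + T(n-1) = 2 + 2 + T(n-2) = ... = 2n + T(0) = 2n + 1.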

Now, the T(n) that I just described is bad in terms of complexity theory. The reason is that complexity theory isn't about describing how much work is required to compute functions that take only a single integer as an argument. In other words, we aren't looking at the work required for a function of the form F(n). We want something more general: how much work is required to perform an algorithm on an input of size n. For example, MergeSort is an algorithm for sorting a list of objects. It requires roughly n log(n) operations to run MergeSort on a list of n items. Notice that MergeSort isn't doing anything with the number n; rather, it operates on a list of size n. In contrast, our factorial function F(n) isn't operating on an input of size n: presumably n is an integer type, so it is probably 32 bits or 64 bits, no matter its value. Or you can get picky and say that its size is the minimum number of bits needed to describe it. In any case, n is the input, not the size of the input.

When you are answering these questions, it is important to be very clear about whether they want a recurrence relation that describes the output of the function, or a recurrence relation that describes the run time of the function.
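To make the distinction concrete, here is a minimal sketch that instruments the factorial above with an operation counter, assuming the same cost model as before (one operation for the conditional, one for the multiplication). The counter ops and the driver in main are illustrative scaffolding I'm adding here, not part of the factorial code itself.

#include <stdio.h>

static long ops = 0;  /* counts operations under our assumed cost model */

int factorial(int n)
{
    ops++;                          /* the if-comparison costs 1 */
    if (n > 0) {
        ops++;                      /* the multiplication costs 1 */
        return n * factorial(n - 1);
    }
    else
        return 1;
}

int main(void)
{
    for (int n = 0; n <= 5; n++) {
        ops = 0;
        int f = factorial(n);
        /* f is the output F(n) = n!; ops should equal T(n) = 2n + 1 */
        printf("n=%d  F(n)=%d  ops=%ld  2n+1=%d\n", n, f, ops, 2*n + 1);
    }
    return 0;
}

Running it, F(n) reproduces the factorial values 1, 1, 2, 6, 24, 120, while ops comes out 1, 3, 5, 7, 9, 11, which is exactly T(n) = 2n + 1: two different functions extracted from the same piece of code.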

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow