I'm having trouble understanding the following property of divide-and-conquer algorithms:

A recursive method that divides a problem of size N into two independent (nonempty) parts that it solves recursively calls itself less than N times.
The proof is:

A recursive function that divides a problem of size N into two independent (nonempty) parts that it solves recursively calls itself less than N times. If the parts are one of size k and one of size N-k, then the total number of recursive calls that we use is T(N) = T(k) + T(N-k) + 1, for N >= 2, with T(1) = 0. The solution T(N) = N-1 is immediate by induction. If the sizes sum to a value less than N, the proof that the number of calls is less than N-1 follows from the same inductive argument.
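As a sanity check on the recurrence (this snippet is mine, not from the book), T can be evaluated directly. The split point used here, k = N/2, is an arbitrary choice; by the inductive argument, any split 1 <= k < N gives the same answer:

```java
public class RecurrenceCheck {
    // T(N) = T(k) + T(N-k) + 1 with T(1) = 0.
    // k is an arbitrary nonempty split; the midpoint is used here,
    // but any 1 <= k < N produces the same total.
    static int t(int n) {
        if (n == 1) return 0;
        int k = n / 2;               // arbitrary split point
        return t(k) + t(n - k) + 1;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 10; n++)
            System.out.println("T(" + n + ") = " + t(n)); // always n - 1
    }
}
```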
I understand the formal proof above perfectly. What I don't understand is how this property is connected to the examples usually used to demonstrate the divide-and-conquer idea, in particular the find-the-maximum problem:
static double max(double a[], int l, int r)
{
    if (l == r) return a[l];   // one element: it is the maximum
    int m = (l+r)/2;           // split into a[l..m] and a[m+1..r]
    double u = max(a, l, m);
    double v = max(a, m+1, r);
    if (u > v) return u; else return v;
}
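To make the counting concrete, here is the same method instrumented with a counter (the counter and class wrapper are my additions); calls counts every invocation of max made from inside max:

```java
public class MaxCount {
    static int calls = 0;  // my addition: counts recursive invocations

    static double max(double[] a, int l, int r) {
        if (l == r) return a[l];   // one element: it is the maximum
        int m = (l + r) / 2;       // split into a[l..m] and a[m+1..r]
        calls++;                   // count the call on the left half
        double u = max(a, l, m);
        calls++;                   // count the call on the right half
        double v = max(a, m + 1, r);
        return u > v ? u : v;
    }

    public static void main(String[] args) {
        double[] a = {3.0, 1.0, 4.0, 1.5};  // N = 4
        calls = 0;
        max(a, 0, a.length - 1);
        System.out.println(calls);  // prints 6, matching the count I describe below
    }
}
```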
In this case, when a consists of N = 2 elements, max(0, 1) will call itself 2 more times, namely max(0, 0) and max(1, 1), which equals N. If N = 4, max(0, 3) will call itself 2 times, and then each of those calls will also call max 2 times, so the total number of calls is 2 + 4 = 6 > N. What am I missing?