Question

I'm having some trouble with the following algorithm:

for (int i = 1; i < n; i = 2 * i)
    for (int j = i; j < n; j++)
        // do something (const time)

Showing that the runtime is O(nlogn) wasn't too hard, but I'm not sure how to show that it's Big Omega(nlogn). Intuitively it must be the case, since for a given n the running time doesn't vary between best and worst case.

Any suggestions would be very appreciated!


Solution 2

To see the intuition behind why this is Θ(nlogn) (and in particular Ω(nlogn)), all we need to do is count how many times the inner loop body executes.

As you have said, the outer loop executes logn times. This is easy to see: you can only double i logn times before the condition i < n stops holding.

So for the inner loop, we can see it executes a different number of steps each time i changes:

On the first iteration, it executes n - 1 times (as i starts at 1).

On the next, n - 2 times.

On the third, n - 4 times.

We can see the pattern: (n - 1) + (n - 2) + (n - 4) + (n - 8) + ...

Now, let's group the n's together. How many (n - k) terms do we have? There are logn of them, since that's how many times the outer loop executes.

So n + n + ... + n - 1 - 2 - 4 - 8 - ... can be rewritten as nlogn - (1 + 2 + 4 + 8 + ...).

Now, how do we pin down those subtractions? There are logn of them (one per outer-loop iteration), and each is double the last: 1, 2, 4, ..., 2^(logn - 1). In other words, the total subtracted is the sum from i = 0 to logn - 1 of 2^i.

A standard CS formula says the sum from i = 0 to k of 2^i is 2^(k+1) - 1. (This is used to count the maximum number of nodes in a complete binary tree, for example, and you can prove it by induction.)

Substituting k = logn - 1 into the formula gives 2^logn - 1 = n - 1, so the total count is nlogn - (n - 1) = nlogn - n + 1. At this point, set up your inequality and find a c: for instance, nlogn - n + 1 >= (1/2)nlogn holds for all n >= 4, which establishes the Big Omega(nlogn) bound.

OTHER TIPS

In this algorithm, there is no best or worst execution path: given a value of n, the execution path is fixed, so the best and worst cases are the same.

A good rule of thumb is that, if the control flow of the algorithm is not data-driven, then the best and worst cases are the same.

An example of what I mean by data-driven would be an implementation of the quicksort algorithm in which the pivot is always chosen to be the first element of the array. Sometimes that first element will split the rest of the data perfectly (best case) and sometimes it will be a maximum or minimum (worst case).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow