Question

I was reading Introduction to Algorithms by Thomas H. Cormen when I encountered this statement (in the section on asymptotic notations):

when a > 0, any linear function an + b is in O(n^2), which is essentially verified by taking c = a + |b| and n0 = max(1, -b/a)

I can't understand why it is O(n^2) and not O(n). When would an O(n) upper bound fail?

For example, for 3n + 2, according to the book:

3n + 2 <= 5n^2    for n >= 1

but this also holds:

3n + 2 <= 5n    for n >= 1

So why is the upper bound in terms of n^2?
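
A quick numerical check (my own sanity check, not from the book) confirms that both inequalities hold for small n:

    # verify both bounds for n = 1..100; a sanity check for small n, not a proof
    for n in range(1, 101):
        assert 3 * n + 2 <= 5 * n * n   # the book-style O(n^2) bound with c = 5
        assert 3 * n + 2 <= 5 * n       # the tighter O(n)-style bound
    print("both inequalities hold for n = 1..100")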


Solution

Well, I found the relevant part of the book. The excerpt indeed comes from the chapter introducing big-O notation and its relatives.

The formal definition of big-O says that the function in question does not grow asymptotically faster than the comparison function. It says nothing about whether the function grows asymptotically slower, so:

f(n) = n is in O(n), O(n^2) and also O(e^n) because n does not grow asymptotically faster than any of these. But n is not in O(1).

Any function in O(n) is also in O(n^2) and O(e^n).
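
Written out with the constants from the formal definition (the particular witnesses below are my own choice; any valid ones do):

n <= 1 * n^2    for all n >= 1,  so n is in O(n^2)  (take c = 1, n0 = 1)
n <= 1 * e^n    for all n >= 1,  so n is in O(e^n)

But for any constant c we have n > c * 1 as soon as n > c, so n is not in O(1).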

If you want to describe the tight asymptotic bound, you would use the big-Θ notation, which is introduced just before the big-O notation in the book. f(n) ∊ Θ(g(n)) means that f(n) does not grow asymptotically faster than g(n) and the other way around. So f(n) ∊ Θ(g(n)) is equivalent to f(n) ∊ O(g(n)) and g(n) ∊ O(f(n)).

So f(n) = n is in Θ(n) but not in Θ(n^2) or Θ(e^n) or Θ(1).

Another example: f(n) = n^2 + 2 is in O(n^3) but not in Θ(n^3); it is in Θ(n^2).
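
Spelled out with explicit constants (again my own choice of witnesses):

n^2 <= n^2 + 2 <= 3n^2    for all n >= 1,  so n^2 + 2 is in Θ(n^2)
n^2 + 2 <= 3n^3           for all n >= 1,  so n^2 + 2 is in O(n^3)

But n^3 is not in O(n^2 + 2), since n^3 > c * (n^2 + 2) for every constant c once n is large enough, so n^2 + 2 is not in Θ(n^3).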

You need to think of O(...) as a set (which is why the set theoretic "element-of"-symbol is used). O(g(n)) is the set of all functions that do not grow asymptotically faster than g(n), while Θ(g(n)) is the set of functions that neither grow asymptotically faster nor slower than g(n). So a logical consequence is that Θ(g(n)) is a subset of O(g(n)).
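
In set notation, this subset relation just restates the definition from above:

Θ(g(n)) = { f(n) : f(n) ∊ O(g(n)) and g(n) ∊ O(f(n)) } ⊆ O(g(n))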

Often = is used instead of the ∊ symbol, which is really misleading. It is pure notation and does not share any properties with the actual =. For example, 1 = O(1) and 2 = O(1), but not 1 = O(1) = 2. It would be better to avoid using = for big-O notation. Nonetheless you will later see that the = notation is useful, for example when you want to express the complexity of rest terms: f(n) = 2*n^3 + 1/2*n - sqrt(n) + 3 = 2*n^3 + O(n), meaning that asymptotically the function behaves like 2*n^3 and the neglected part does not grow asymptotically faster than n.
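
To see the rest-term claim with explicit constants (the constants here are my own choice, not from the original answer):

|1/2*n - sqrt(n) + 3| <= 1/2*n + sqrt(n) + 3 <= 1/2*n + n + 3n = 4.5n    for all n >= 1

so the neglected part is indeed in O(n), with for example c = 4.5 and n0 = 1.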

All of this runs somewhat against the typical usage of big-O notation. You often find the time/memory complexity of an algorithm stated with it, when really it should be stated with big-Θ notation. For example, if you have an algorithm in O(n^2) and one in O(n), the first one could actually still be asymptotically faster, because it might also be in Θ(1). The reason is sometimes that a tight Θ-bound does not exist or is not known for a given algorithm, so at least the big-O gives you a guarantee that things won't take longer than the stated bound. By convention you always try to give the lowest big-O bound you know, although this is not formally required.

OTHER TIPS

The formal definition of big-O notation (from Wikipedia) says that:

f(x) = O(g(x)) as x → ∞

if and only if there is a positive constant M such that for all sufficiently large values of x, f(x) is at most M multiplied by g(x) in absolute value. That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that

|f(x)| ≤ M|g(x)| for all x > x₀ (i.e., for sufficiently large x)

In our case, we can easily show that

|an + b| < |an + n|    (for n sufficiently large, i.e. when n > b)

Then |an + b| < (a+1)|n|

Since a + 1 is a constant (it corresponds to M in the formal definition), we indeed have

an + b = O(n)
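
Applied to the concrete example from the question (a = 3, b = 2), the same argument gives M = a + 1 = 4 and x0 = b = 2:

3n + 2 < 4n    for all n > 2

so 3n + 2 = O(n), exactly as you suspected.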

You were right to doubt.

Licensed under: CC-BY-SA with attribution