Question

Possible Duplicate:
Plain English explanation of Big O

I have always struggled to calculate the Big-O time and space complexity of the algorithms I write.

Can anybody please point me to a good resource for studying more about the space complexity of algorithms?

EDIT: I had searched for tutorials before posting here. Unfortunately, all the tutorials focus on run time complexity and hardly write more than a few lines on space complexity.

Was it helpful?

Solution

Depending on where you want to jump in, this may be a good toe-dipper. The wiki page is also high quality, and goes a little more in depth. This is a good upper-level undergraduate or intro graduate text, and will go into the linear speedup theorem, a large reason computer scientists use big-O notation at all when discussing algorithm runtimes. In a nutshell it says you can always get a linear factor in speed improvement by throwing an exponential amount of money at hardware.

The grace of Big-O notation is that it allows us to discard the "loose change" off of the ends of our cost formulas. This is justified by the implicit assumption that we only care about the limiting case where the size of our input goes to infinity, and the largest terms of our costs dominate the others.

Performing complexity analysis requires you to first choose a measure for your input, then decide which resource's consumption you wish to measure, and then count the amount of it the algorithm uses when run on input of a given size. By convention the input size is named N. Typical resources are the number of "steps" executed or the number of items stored across all containers, but these are only (popular) examples. For contrast, comparison-based sorting algorithms are often analyzed exclusively in terms of the number of swaps made.
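For instance, here is a minimal sketch (my own hypothetical example, not part of the original answer) of instrumenting a linear search in Python to count comparisons as the chosen resource:

def linear_search(items, target):
    """Scan items left to right, counting comparisons as the measured resource."""
    comparisons = 0
    for index, value in enumerate(items):
        comparisons += 1               # one "step" of the chosen resource
        if value == target:
            return index, comparisons
    return -1, comparisons

# Worst case: the target is absent, so every one of the N items is compared.
for n in (10, 100, 1000):
    _, steps = linear_search(list(range(n)), -1)
    print(n, steps)                    # steps == n, so the cost grows linearly in N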

The size of the input is generally not the only factor determining how long the algorithm takes to run or how much space it needs. For example, insertion sort's run time is dramatically different between inputs of equal length presented in already-sorted and reverse-sorted order. This is why we talk about Worst-Case vs. Average-Case complexity (or best-case, etc.). By asking, e.g., "What's the worst possible thing that could happen?", we can decide how to step through the source and count usage.
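To see that difference concretely (again a sketch of mine, assuming a standard insertion sort), counting element shifts on already-sorted versus reverse-sorted input of the same length shows the gap:

def insertion_sort_shifts(items):
    """Insertion-sort a copy of items, returning the number of element shifts performed."""
    a = list(items)
    shifts = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]            # shift one element to the right
            shifts += 1
            j -= 1
        a[j + 1] = key
    return shifts

n = 1000
print(insertion_sort_shifts(range(n)))          # already sorted: 0 shifts (best case)
print(insertion_sort_shifts(range(n, 0, -1)))   # reverse sorted: n*(n-1)/2 = 499500 shifts (worst case)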

Average-case complexities are tricky, as they require knowledge of the distribution of possible inputs. Worst-case complexities are independent of input distributions and provide us with hard upper bounds, which in practice are often all we need.

For example, if an algorithm such as Bubble Sort takes an array of items as input, a typical measure is the length of the array. Suppose we wish to count the number of swaps it makes in the worst case. Here's pseudo code for it, taken from Wikipedia:

procedure bubbleSort( A : list of sortable items )
  repeat
    swapped = false
    for i = 1 to length(A) - 1 inclusive do:
      if A[i-1] > A[i] then
        swap( A[i-1], A[i] )
        swapped = true
      end if
    end for
  until not swapped
end procedure
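If you prefer running code, here is a straightforward (unofficial) Python translation of that pseudocode, instrumented with a swap counter so the worst-case formula derived below can be checked on a reverse-sorted input:

def bubble_sort_swaps(items):
    """Bubble-sort a copy of items, returning the sorted list and the number of swaps made."""
    a = list(items)
    swaps = 0
    swapped = True
    while swapped:
        swapped = False
        for i in range(1, len(a)):
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]   # swap the adjacent out-of-order pair
                swaps += 1
                swapped = True
    return a, swaps

n = 100
_, worst_case_swaps = bubble_sort_swaps(range(n, 0, -1))   # reverse-sorted input
print(worst_case_swaps, n * (n - 1) // 2)                  # both print 4950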

Notice the pseudocode is essentially two loops, an inner for loop nested inside an outer repeat loop. The inner loop counts from 1 to length(A) - 1, and makes at most N - 1 swaps, exactly when the largest element of the array is at the front. The outer loop repeats this process so long as any swapping occurred on the last pass. Assuming a worst-case previous pass, the previously-largest unsorted element will be in place at the end of the list, effectively shrinking the distance we can move the next-largest unsorted element by one. So, each successive pass does one less swap, and we end up with

(N-1) + (N-2) + ... + 2 + 1 = N * (N - 1) / 2 = 1/2 * N^2 - N/2

In Big-O notation this becomes

O(1/2 * N^2 - N/2) = O(1/2 * N^2) = O(N^2)

Here, we drop the linear (N/2) term since it will be dominated by the quadratic one as N -> inf. Then we drop the 1/2 leading constant factor, as it is essentially a hardware detail. Note this is a human motivation: the cleverness of big-O is that its definition provides a rigorous framework for housing these motivations. It turns out that framework says we drop the leading constant factors.
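For concreteness (my framing, not the original answer's): one common definition says f(N) is O(g(N)) if there exist constants c > 0 and N0 such that f(N) <= c * g(N) for every N >= N0. For the swap count above,

1/2 * N^2 - N/2 <= 1 * N^2 for all N >= 1

so choosing c = 1 and N0 = 1 satisfies the definition, and the 1/2 and the N/2 simply disappear into the choice of constants.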

Crafting a rigorous proof of complexity is a skill in itself, and knowing definitions alone won't help you much with it. Proof by induction is usually applicable, where one establishes pre-conditions and post-conditions around each pass of a loop. Notice in my informal argument I take into account the previous iteration when reasoning about the current one: this is inductive thinking. "Discrete mathematics", "proof by induction", "combinatorics" and "counting" are all good keywords to look for. (Yes, "counting" is itself a branch of math and it is hard.)
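As a sketch of what such pre- and post-conditions can look like (my own illustration, using the standard bubble sort property), here is the Python translation above with an assertion of the invariant an inductive proof would establish: after pass k, the last k positions already hold their final, sorted values.

def bubble_sort_checked(items):
    """Bubble sort asserting, after every pass, the post-condition an inductive proof would use."""
    a = list(items)
    n = len(a)
    final = sorted(a)          # the target ordering, used only to state the invariant
    passes = 0
    swapped = True
    while swapped:
        swapped = False
        for i in range(1, n):
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
                swapped = True
        passes += 1
        assert a[n - passes:] == final[n - passes:]   # last `passes` slots are in final position
    return a

print(bubble_sort_checked([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]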

Once you have proven a formula, "reducing" it in big-O terms is a different skill, and essentially requires knowing a little calculus (limits). Eventually you will be able to prune away annoying branches in your proofs by establishing that the terms they introduce will be dominated by some other, known one.
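For example (my note), the domination claim used above is just the limit

(N/2) / (1/2 * N^2) = 1/N -> 0 as N -> inf

which is the calculus fact that licenses dropping the linear term from the swap-count formula.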

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow