Question

I understood what amortized analysis does, but can anyone tell me what the main purpose of this kind of analysis is?

What I understood:

Let's say we have three operations a, b, c used 1, 2 and 3 times respectively to achieve d. Based on aggregate analysis, a, b and c are used 2 times each. Is this correct?

I am trying to understand the advantages of this in CLRS, but I am completely lost. For example, in dynamic programming we save the answers to subproblems in tables, which helps us reduce the running time (let's say from exponential to polynomial). But I am unable to get a complete picture of amortized analysis.


Solution

Amortised analysis is a tool to get more useful results than "naive" worst-case analysis. Especially in the realm of advanced data structures, operations can be cheap most of the time but expensive in rare cases; worst-case analysis reports only the latter as characteristic of the data structure. Dynamic arrays, splay trees and some flavors of hash tables are among the more popular examples.

For many purposes, that is too pessimistic. We don't actually need every operation to be fast¹; we execute lots and lots of operations (e.g. during an algorithm) and we want the total runtime to be small. That is what amortised analysis looks at.

Be careful not to confuse amortised analysis with average-case analysis. The former still considers the worst case; it only sums the cost over a whole sequence of operations and spreads it evenly, so each operation gets its share of the worst-case total cost. Average-case analysis, on the other hand, assumes a probability distribution over inputs and/or the sequence of operations, and yields the expected time per operation, which is (by definition) its share of the expected total cost.


  1. Real-time applications are a notable exception.
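To put the distinction in symbols (this is the aggregate method from CLRS): if any sequence of $n$ operations costs at most $T(n)$ in the worst case, then each operation is assigned the amortised cost

$$\hat{c} \;=\; \frac{T(n)}{n},$$

whereas an average-case bound would instead be $\mathbf{E}[T(n)]/n$ under some assumed distribution of inputs.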

Other tips

Consider the std::vector container from the STL, together with its appending operation, push_back(). This operation has amortized running time $O(1)$.

Why? std::vector is implemented as an array which can grow dynamically if more space is needed. The strategy it uses for resizing is simple: it doubles its size whenever you try to push_back a new element onto a vector that is full. After that resizing operation, appending elements can be done in constant time: they are just inserted into the array "behind" the last element.
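A rough aggregate calculation, assuming the capacity exactly doubles on every resize (the C++ standard does not fix the growth factor, so real implementations may differ): $n$ calls to push_back perform $n$ plain writes plus the element copies done during resizes, so the total cost is

$$T(n) \;\le\; n + \left(1 + 2 + 4 + \cdots + 2^{\lceil \log_2 n \rceil - 1}\right) \;<\; n + 2n \;=\; 3n,$$

which gives an amortized cost of $T(n)/n < 3 = O(1)$ per push_back.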

So, if you grow a std::vector object by calling push_back very often, resizing operations become rarer and rarer because of that exponential growth strategy: the fraction of push_back calls that trigger a resize tends to zero, so almost all of them run in constant time.
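Below is a minimal, self-contained sketch of that doubling strategy, purely for illustration (it is not how std::vector is actually implemented, and the class and member names are made up). It counts the element copies caused by resizing, so you can check the aggregate bound above empirically:

```cpp
#include <cstddef>
#include <iostream>

// Toy dynamic array with a doubling growth strategy (illustration only,
// not a replacement for std::vector). `copies` counts how many element
// copies the resizes cause in total.
struct DynArray {
    int*        data     = nullptr;
    std::size_t size     = 0;
    std::size_t capacity = 0;
    std::size_t copies   = 0;   // bookkeeping for the analysis

    void push_back(int x) {
        if (size == capacity) {                      // rare, expensive case
            std::size_t new_cap = capacity ? 2 * capacity : 1;
            int* new_data = new int[new_cap];
            for (std::size_t i = 0; i < size; ++i) { // copy old elements over
                new_data[i] = data[i];
                ++copies;
            }
            delete[] data;
            data = new_data;
            capacity = new_cap;
        }
        data[size++] = x;                            // common, cheap case
    }

    ~DynArray() { delete[] data; }
};

int main() {
    DynArray a;
    const std::size_t n = 1000000;
    for (std::size_t i = 0; i < n; ++i) a.push_back(static_cast<int>(i));

    // Total work = n writes + a.copies element copies; the copies stay
    // below 2n, so the amortized cost per push_back is O(1).
    std::cout << "pushes: " << n
              << ", copies during resizes: " << a.copies
              << ", copies per push: " << static_cast<double>(a.copies) / n << '\n';
}
```

Running this, the reported "copies per push" stays below 2 no matter how large you make n, which is exactly the amortized $O(1)$ bound in action.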

Long story short: you do amortized analysis to determine the "average" cost per operation of your algorithm when it usually performs well but occasionally does not, as with inserting elements at the end of a std::vector.

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange