Question

What is the definition of "amortized constant time" when talking about the time complexity of an algorithm?


The solution

Amortized time, explained in simple terms:

If you perform an operation a million times, you don't really care about the worst case or the best case of that operation. What you care about is how much time is taken in total when you repeat the operation a million times.

So it doesn't matter if the operation is very slow once in a while, as long as "once in a while" is rare enough for the slowness to be diluted away. Essentially, "amortized time" means "average time taken per operation, if you do many operations". Amortized time doesn't have to be constant; you can have linear amortized time, logarithmic amortized time, or anything else.

Let's take the example of a dynamic array, to which you repeatedly add new items. Normally, adding an item takes constant time (that is, O(1)). But each time the array is full, you allocate twice as much space, copy your data into the new region, and free the old one. Assuming allocations and deallocations run in constant time, this enlargement process takes O(n), where n is the current size of the array.

So each time you grow, you take about twice as long as the previous growth. But you have also waited twice as long before doing it! The cost of each growth can therefore be "spread out" among the insertions. This means that in the long run, the total time taken to add m items to the array is O(m), and so the amortized time (i.e. the time per insertion) is O(1).
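To make the doubling scheme concrete, here is a minimal C++ sketch of such a dynamic array (the type IntArray and its push are illustrative names of my own, not a standard API; in practice a container such as std::vector already does this for you):

    // Minimal sketch of a dynamic array that doubles its capacity when full,
    // to show why push is amortized O(1). Not production code.
    #include <cstddef>
    #include <cstdio>

    struct IntArray {
        int*        data     = nullptr;
        std::size_t size     = 0;
        std::size_t capacity = 0;

        void push(int value) {
            if (size == capacity) {                       // array is full: grow it
                std::size_t new_capacity = (capacity == 0) ? 1 : capacity * 2;
                int* new_data = new int[new_capacity];
                for (std::size_t i = 0; i < size; ++i)    // O(n) copy, but it happens rarely
                    new_data[i] = data[i];
                delete[] data;
                data = new_data;
                capacity = new_capacity;
            }
            data[size++] = value;                         // the common O(1) case
        }

        ~IntArray() { delete[] data; }
    };

    int main() {
        IntArray a;
        for (int i = 0; i < 1000000; ++i)
            a.push(i);                                    // a million pushes, O(1) amortized each
        std::printf("size = %zu, capacity = %zu\n", a.size, a.capacity);
    }

Each individual push is cheap except for the roughly log2(n) pushes that trigger a copy, and those copies are exactly the O(n) growth steps described above.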

Other tips

This means that, over time, the worst-case scenario defaults to O(1), or constant time. A common example is the dynamic array. If we have already allocated memory for a new entry, adding it is O(1). If we haven't, we allocate, say, double the current amount. That particular insertion will not be O(1), but rather something else.

What is important is that the algorithm guarantees that after a sequence of operations the expensive operations are amortized, thereby making the entire sequence O(1).

Or in stricter terms,

There is a constant c, such that for every sequence of operations (including one ending with a costly operation) of length L, the time is not greater than c * L. (Thanks, Rafal Dowgird)

To develop an intuitive way of thinking about it, consider inserting elements into a dynamic array (for example, std::vector in C++). Let's plot a graph showing the dependence of the number of operations (Y) needed to insert N elements into the array:

[plot: number of operations (Y) versus number of inserted elements (N)]

The vertical parts of the black graph correspond to reallocations of memory performed to grow the array. We can see that this dependence can be roughly represented as a line, and the line equation is Y = C * N + b (C is a constant, b = 0 in our case). Therefore we can say that, on average, we need to spend C * N operations to add N elements to the array, or C * 1 operations to add one element (amortized constant time).
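The plot can also be reproduced numerically. Below is a small experiment of my own (not from the original answer) that counts how many element copies std::vector performs while a million elements are inserted; note that the growth factor of std::vector is implementation-defined (commonly 1.5 or 2), so the constant C varies, but the total remains linear in N:

    // Count the element copies caused by std::vector reallocations.
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> v;
        long long writes = 0;   // the insertions themselves
        long long copies = 0;   // elements moved during reallocations
        for (int n = 1; n <= 1000000; ++n) {
            if (v.size() == v.capacity())                   // the next push_back will reallocate...
                copies += static_cast<long long>(v.size()); // ...and copy every existing element
            v.push_back(n);
            ++writes;
        }
        std::printf("insertions = %lld, copies = %lld, average ops per insertion = %.2f\n",
                    writes, copies,
                    static_cast<double>(writes + copies) / static_cast<double>(writes));
    }

On a typical implementation the reported average settles at a small constant (around 2 to 3 operations per insertion) regardless of how large N gets, which is the slope C of the line above.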

I found the Wikipedia explanation below useful, after reading it three times:

Source: https://en.wikipedia.org/wiki/Amortized_analysis#Dynamic_Array

"Dynamic Array

Amortized Analysis of the Push operation for a Dynamic Array

Consider a dynamic array that grows in size as more elements are added to it such as an ArrayList in Java. If we started out with a dynamic array of size 4, it would take constant time to push four elements onto it. Yet pushing a fifth element onto that array would take longer as the array would have to create a new array of double the current size (8), copy the old elements onto the new array, and then add the new element. The next three push operations would similarly take constant time, and then the subsequent addition would require another slow doubling of the array size.

In general if we consider an arbitrary number of pushes n to an array of size n, we notice that push operations take constant time except for the last one which takes O(n) time to perform the size doubling operation. Since there were n operations total we can take the average of this and find that for pushing elements onto the dynamic array takes: O(n/n)=O(1), constant time."
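To spell out the averaging in that quote as an aggregate-analysis sum (assuming the array starts with capacity 1 and doubles whenever it is full): each doubling copies the elements present at that moment, so over n pushes the copies total 1 + 2 + 4 + ... + 2^(k-1) = 2^k - 1 < 2n, and therefore

    total cost of n pushes   <=  n writes + 2n copies  =  3n
    amortized cost per push  <=  3n / n  =  3  =  O(1)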

To my understanding, here it is as a simple story:

Assume you have a lot of money and you want to stack it up in a room. Also assume that your arms and legs are as long as you will ever need them to be, now or in the future, and that all the money has to go in a single room, so it is easy to lock.

So you go right to the far corner of the room and start stacking. As you stack, the room slowly runs out of space. However, while you are filling it, stacking is easy: get the money, put the money down. That's O(1). We don't need to move any of the previous money.

Once the room runs out of space, we need another, bigger room. Here there is a problem: since we can have only one room (and therefore only one lock), we have to move all the existing money into the new, bigger room. So we move all the money from the small room to the bigger room, that is, we stack all of it again. So we DO need to move all the previous money, which is O(N) (assuming N is the total count of money accumulated so far).

In other words, it was easy up to N: only one operation each time. But when we needed to move to a bigger room, we did N operations. So, averaged out, each piece of money costs one insert at the beginning and one more move when relocating to the new room: a total of two operations, one insert and one move.

Assuming N is large, say 1 million even in the small room, those 2 operations per piece do not grow with N (1 million), so the cost per piece is considered constant, or O(1).

Now assume we do all of the above in the bigger room and again need to move. It is still the same. Say N2 (say, 1 billion) is the new count of money in the bigger room.

So we have N2 (which includes the previous N, since we moved everything from the small room to the bigger room).

We still need only 2 operations per piece: one insert into the bigger room, then one move operation when relocating to an even bigger room.

So even for N2 (1 billion), it is 2 operations per piece, which is again negligible. So it is constant, or O(1).

So as N grows from N to N2 or beyond, it does not matter much: it is still a constant, O(1), number of operations required for each of the N pieces.


Now assume N is 1, very small: the count of money is small, and you have a very small room that fits only 1 piece of money.

As soon as you put that piece of money in the room, the room is full.

When you go to the bigger room, assume it can fit only one more piece of money, for a total of 2: the previously moved money and 1 more. And again it is full.

This way N grows slowly, and the cost is no longer a constant O(1), since we are moving all the money from the previous room but can fit only 1 more piece each time.

After 100 moves, the new room holds the 100 pieces of money from before plus 1 more it can accommodate. This is O(N) per insertion, since O(N+1) is O(N): 100 and 101 are of the same order, both in the hundreds, as opposed to the previous story, where a couple of operations were compared against millions or billions. The arithmetic is sketched below.

So this is an inefficient way of allocating rooms (or memory/RAM) for our money (variables).
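The arithmetic behind that story, assuming each new room holds exactly one more piece of money than the previous one, is:

    moves for the k-th piece   =  k - 1                       (everything already in the room)
    total moves for N pieces   =  0 + 1 + 2 + ... + (N - 1)   =  N(N - 1) / 2  =  O(N^2)
    average moves per piece    =  O(N^2) / N                  =  O(N)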


So a better way is to allocate rooms whose sizes grow as powers of 2.

1st room size  = fits 2 count of money
2nd room size  = fits 4 count of money
3rd room size  = fits 8 count of money
4th room size  = fits 16 count of money
5th room size  = fits 32 count of money
6th room size  = fits 64 count of money
7th room size  = fits 128 count of money
8th room size  = fits 256 count of money
9th room size  = fits 512 count of money
10th room size = fits 1,024 count of money
11th room size = fits 2,048 count of money
...
16th room size = fits 65,536 count of money
...
32nd room size = fits 4,294,967,296 count of money
...
64th room size = fits 18,446,744,073,709,551,616 count of money

Why is this better? Because the allocation grows slowly at the beginning and faster later on, in step with the amount of memory in our RAM.

This is helpful because, although in the first case the total amount of work per piece of money is fixed (2) and negligible compared to the room size (N), the room we grabbed in the initial stages might be far too big (1 million) and might never be used fully, depending on whether we ever get that much money to store in the first place.

In the last case, however, with powers of 2, the allocation grows within the limits of our RAM. So, by increasing in powers of 2, the amortized cost stays constant and the scheme remains friendly to the limited RAM we have today.
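Two rough figures make this concrete (my own back-of-the-envelope numbers, assuming doubling):

    rooms (moves) needed for N pieces  ~  log2(N)   (about 20 for a million pieces, since 2^20 = 1,048,576)
    empty space in the current room    <  N         (right after a move the new room is only half full, and it only fills up from there)

So the expensive moves become exponentially rarer as N grows, and the over-allocated space never exceeds the amount of money actually stored.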

The explanations above apply to Aggregate Analysis, the idea of taking "an average" over multiple operations. I am not sure how they apply to the Banker's method or the Physicist's method of amortized analysis.

Now, I am not exactly sure of the correct answer, but it would have to do with the principal condition of both the Physicist's and the Banker's methods:

(Sum of amortized-cost of operations) >= (Sum of actual-cost of operations).
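For reference, this condition is exactly what the Physicist's (potential) method builds in: with a potential function Φ that never drops below its initial value Φ_0, the amortized cost of the i-th operation is defined as

    amortized_i = actual_i + Φ_i - Φ_(i-1)

and summing over a sequence of n operations telescopes to

    Sum(amortized) = Sum(actual) + Φ_n - Φ_0  >=  Sum(actual).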

The chief difficulty I face is that, given that the amortized asymptotic costs of operations differ from their normal asymptotic costs, I am not sure how to judge the significance of amortized costs.

That is, when somebody gives me an amortized cost, I know it is not the same as the normal asymptotic cost. What conclusions am I to draw from the amortized cost, then?

Since some operations are overcharged while other operations are undercharged, one hypothesis could be that quoting the amortized costs of individual operations is meaningless.

For example: for a Fibonacci heap, quoting the amortized cost of just Decrease-Key as O(1) is meaningless, since the costs are reduced by "work done by earlier operations in increasing the potential of the heap."
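(For what it is worth, the textbook potential function for Fibonacci heaps, e.g. the one used in CLRS, is Φ(H) = t(H) + 2·m(H), the number of root trees plus twice the number of marked nodes. Each cascading cut during Decrease-Key adds one tree but unmarks one node, so it lowers the potential by 1 and is paid for by potential stored up earlier, which is precisely the "work done by earlier operations" referred to above.)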

OR

We could have another hypothesis that reasons about the amortized-costs as follows:

  1. I know that the expensive operation is going to be preceded by MULTIPLE LOW-COST operations.

  2. For the sake of analysis, I am going to overcharge some low-cost operations, SUCH THAT THEIR ASYMPTOTIC-COST DOES NOT CHANGE.

  3. With these increased low-cost operations, I can PROVE THAT THE EXPENSIVE OPERATION has a SMALLER ASYMPTOTIC COST.

  4. Thus I have improved/decreased the ASYMPTOTIC-BOUND of the cost of n operations.

Thus amortized-cost analysis and amortized cost bounds are, in effect, applicable only to the expensive operations; the cheap operations have the same asymptotic amortized cost as their normal asymptotic cost.
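A concrete instance of hypothesis 2 is the Banker's (accounting) method applied to the doubling dynamic array; the charging scheme below is the standard textbook one, not something taken from the answers above. Charge every push a flat 3 credits:

    1 credit pays for writing the new element itself,
    1 credit is banked to pay for copying this element at the next doubling,
    1 credit is banked to pay for copying one of the elements that was already
      in the array at the last doubling.

Every push is overcharged only by a constant, so its asymptotic cost does not change (point 2), and when a full array of 2k elements doubles, the 2k copies are fully paid by the 2k credits banked by the k pushes since the previous doubling (point 3).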

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow