Question

Most implementations of skip lists I've seen use a randomized algorithm to decide whether an element should be copied up to the next level.

But I think promoting the odd-indexed elements of each level into the level above would also give logarithmic search complexity. Why isn't this used?

E.g., data: 1 2 3 4 5 6 7 8 9

Skip list:

1----------------
1---------------9
1-------5-------9
1---3---5---7---9
1-2-3-4-5-6-7-8-9
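
To make the scheme concrete, here is a minimal Python sketch (the name build_levels is mine, not from the question) that derives the levels above by promoting every other element:

    def build_levels(data):
        """Deterministic variant from the question: promote every other
        (1-based odd-indexed) element of each level to the level above."""
        levels = [list(data)]
        while len(levels[-1]) > 1:
            levels.append(levels[-1][::2])  # keep indices 0, 2, 4, ...
        return levels

    for level in reversed(build_levels(range(1, 10))):
        print(level)  # prints the five levels drawn above, top first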


Solution

It is not used because it is only easy to maintain while building the list from scratch. What happens when you add or remove an element in an existing list? Elements that used to be odd-indexed become even-indexed, and vice versa.

In your example, suppose you now insert 3.5. To maintain the structure as you described, this requires O(k + k/2 + k/4 + ...) changes, where k is the number of elements after the element you just inserted.

Since on average k is about n/2, this gives O(n/2 + n/4 + ...) = O(n) add/remove complexity on average.
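
To see the cost concretely, here is a hedged sketch (helper names are mine) that inserts 3.5 at the bottom level, rebuilds the upper levels to restore the odd-index invariant, and counts how many upper-level entries had to change:

    import bisect

    def build_levels(data):
        # promote every other (1-based odd-indexed) element upward
        levels = [list(data)]
        while len(levels[-1]) > 1:
            levels.append(levels[-1][::2])
        return levels

    def insert_and_count(levels, value):
        """Insert into the bottom level, rebuild every upper level,
        and count the upper-level entries that differ afterwards."""
        bottom = list(levels[0])
        bisect.insort(bottom, value)
        new_levels = build_levels(bottom)
        changed = sum(len(set(new) ^ set(old))
                      for new, old in zip(new_levels[1:], levels[1:]))
        return new_levels, changed

    levels = build_levels(range(1, 10))
    _, changed = insert_and_count(levels, 3.5)
    print(changed)  # 12: nearly every promoted entry after 3.5 moved

Almost every promoted element after the insertion point shifts, which is exactly the O(k + k/2 + ...) rebuild described above.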

Other tips

Because if you start deleting or inserting nodes in the middle, the structure quickly requires rebalancing, or it loses its logarithmic guarantees on access and update.
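
For contrast, a minimal sketch of the standard randomized promotion rule (parameter names are mine): each node's height comes from independent coin flips, so inserting or deleting one node never changes the height of any other node, and only the O(log n) expected pointers around it are touched.

    import random

    def random_level(max_level, p=0.5):
        """Flip a coin per level: a node reaches level i with
        probability p**(i-1), independently of its neighbours."""
        level = 1
        while level < max_level and random.random() < p:
            level += 1
        return level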

Actually, there is a structure very similar to what you suggest: an interval tree. It gets around the update problem by not using actual elements as intermediate node labels, though it too requires some care to get good performance.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow