Question

I have already looked into the related question Rehashing process in hashmap or hashtable. However, I need to know how exactly the rehashing process is performed. If, for example, the load factor is .75:

1- Does this mean that we forget about the existing hash table and, for every existing entry, take them one by one and re-insert them into a new set of buckets, this time with a new hash function?

2- If that's so, then what is the performance of rehashing, say when there are n existing entries? Is it O(n) amortized?


Solution

Does this mean that we forget about the existing hash table and, for every existing entry, take them one by one and re-insert them into a new set of buckets, this time with a new hash function?

Yes, that's what happens when the underlying array is considered too small. We have to recalculate the hash of each entry because the new hash table has a different compression function, based on its new size.
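To make this concrete, here is a minimal sketch of a chained hash table with a .75 load factor. The class name, the doubling policy, and the helper names are assumptions for illustration, not the JDK's actual HashMap internals:

```java
import java.util.LinkedList;

// Hypothetical minimal hash table with separate chaining; not HashMap's real code.
public class SimpleHashTable<K, V> {
    private static final double LOAD_FACTOR = 0.75;
    private LinkedList<Entry<K, V>>[] buckets;
    private int size = 0;

    private static class Entry<K, V> {
        final K key;
        V value;
        Entry(K key, V value) { this.key = key; this.value = value; }
    }

    @SuppressWarnings("unchecked")
    public SimpleHashTable() {
        buckets = (LinkedList<Entry<K, V>>[]) new LinkedList[16];
    }

    // Compression function: maps a key's hash code to a bucket index.
    // It depends on the current capacity, which is why every entry must
    // be rehashed when the array grows.
    private int indexFor(K key, int capacity) {
        return (key.hashCode() & 0x7fffffff) % capacity;
    }

    public void put(K key, V value) {
        if ((double) (size + 1) / buckets.length > LOAD_FACTOR) {
            rehash();
        }
        int i = indexFor(key, buckets.length);
        if (buckets[i] == null) buckets[i] = new LinkedList<>();
        for (Entry<K, V> e : buckets[i]) {
            if (e.key.equals(key)) { e.value = value; return; }
        }
        buckets[i].add(new Entry<>(key, value));
        size++;
    }

    // Rehashing: allocate a larger array, then walk every existing entry
    // and re-insert it at the index the new compression function gives.
    // One pass over all n entries, so the rehash itself costs O(n).
    @SuppressWarnings("unchecked")
    private void rehash() {
        LinkedList<Entry<K, V>>[] old = buckets;
        buckets = (LinkedList<Entry<K, V>>[]) new LinkedList[old.length * 2];
        for (LinkedList<Entry<K, V>> bucket : old) {
            if (bucket == null) continue;
            for (Entry<K, V> e : bucket) {
                int i = indexFor(e.key, buckets.length);
                if (buckets[i] == null) buckets[i] = new LinkedList<>();
                buckets[i].add(e);
            }
        }
    }
}
```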

If that's so, then what is the performance of rehashing, say when there are n existing entries? Is it O(n) amortized?

Yes: for every entry in the previous array, a new hash is calculated and the entry is re-inserted, so a rehash is O(n) amortized. Depending on how the bucket linked list is implemented, the worst case could be O(n^2), since all the entries could end up in the same bucket.
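To see why this O(n) cost does not hurt insertion overall, assume (as in the sketch above) that the capacity doubles each time the load factor is exceeded. Inserting n elements then triggers rehashes of roughly 1, 2, 4, ..., n/2 entries, and 1 + 2 + 4 + ... + n/2 < n, so all rehashing work across n insertions is O(n) in total, i.e. O(1) extra per insertion on average, even though the single rehash that moves n entries is O(n) on its own.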

Licensed under: CC-BY-SA with attribution