Question

I'm trying to implement Prim's Algorithm and for that I need to have a decreaseKey method for a priority queue (to update the key value in a priority queue). Can I implement this in the STL Priority Queue?

If it helps, this is the algorithm I'm following:

  • for each vertex u in graph G
    • set key of u to INFINITY
    • set parent of u to NIL
  • set key of source vertex to 0
  • en-queue to priority queue Q all vertices in graph with keys as above
  • while Q is not empty
    • pop vertex u with lowest key in Q
    • for each adjacent vertex v of u do
      • if (v is still in Q) and (key(u) + weight-function(u, v) < key(v)) then
        • set u to be parent of v
        • update v's key to equal key(u) + weight-function(u, v) // This part is giving me problems as I don't know how implement decreaseKey in priority queue

Solution

I do not think you can implement it with an STL container directly. Remember you can always write your own heap (priority queue) on top of a vector, but there is a workaround:

Keep an array of the distances you have, let's say d. In your priority queue, store pairs of (distance, vertex index). When you need to decrease a value in the queue, do not delete the old entry; just update the value in the d array and push a new pair into the queue.

Every time you pop a value from the queue, check whether the distance in the pair still matches the one in your array d. If not, it is stale: ignore it.

The time is the same, O(M log M), and the queue holds at most O(M) entries, where M is the number of edges.
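Here is a minimal sketch of that workaround in C++, following the pseudocode from the question; the adjacency-list type and the function name are assumptions for illustration, not from the original post:

    #include <functional>
    #include <limits>
    #include <queue>
    #include <utility>
    #include <vector>

    // Workaround sketch: no decreaseKey. Push a fresh (key, vertex) pair on every
    // update and skip stale pairs when they are popped.
    // adj[u] is assumed to hold (neighbour, weight) pairs; vertex 0 is the source.
    std::vector<int> buildTree(const std::vector<std::vector<std::pair<int, long long>>>& adj) {
        const int n = static_cast<int>(adj.size());
        const long long INF = std::numeric_limits<long long>::max();
        std::vector<long long> key(n, INF);       // the "d" array from the answer
        std::vector<int> parent(n, -1);
        std::vector<bool> done(n, false);         // true once u has left "Q"

        using Entry = std::pair<long long, int>;  // (key, vertex), smallest key first
        std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;

        key[0] = 0;
        pq.push({0, 0});

        while (!pq.empty()) {
            auto [k, u] = pq.top();
            pq.pop();
            if (done[u] || k != key[u]) continue;  // stale pair: ignore it
            done[u] = true;
            for (auto [v, w] : adj[u]) {
                // relaxation as written in the question's pseudocode
                if (!done[v] && key[u] + w < key[v]) {
                    key[v] = key[u] + w;           // "decreaseKey" = update d ...
                    parent[v] = u;
                    pq.push({key[v], v});          // ... and push a new pair
                }
            }
        }
        return parent;
    }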

There is another approach: use a red-black tree, which can insert and delete keys in O(log N) and also retrieve the minimum. In the STL, std::set is typically implemented as a red-black tree.

But although the asymptotic complexity is the same, a red-black tree has a bigger hidden constant, so in practice it tends to run slower, roughly 5 times slower in my experience. It depends on the data, of course.
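A sketch of how decreaseKey could look on top of std::set, keeping (key, vertex) pairs ordered; the struct and member names here are made up for illustration:

    #include <limits>
    #include <set>
    #include <utility>
    #include <vector>

    // std::set used as an ordered priority queue: the minimum is *s.begin(),
    // and "decreaseKey" is erase-old-pair + insert-new-pair, each O(log N).
    struct SetPQ {
        std::set<std::pair<long long, int>> s;   // ordered (key, vertex) pairs
        std::vector<long long> key;

        explicit SetPQ(int n)
            : key(n, std::numeric_limits<long long>::max()) {}

        void insert(int v, long long k) {
            key[v] = k;
            s.insert({k, v});
        }

        void decreaseKey(int v, long long newKey) {
            s.erase({key[v], v});    // O(log N); erases nothing if v was never inserted
            key[v] = newKey;
            s.insert({newKey, v});   // O(log N)
        }

        int extractMin() {           // pop the vertex with the smallest key
            int v = s.begin()->second;
            s.erase(s.begin());
            return v;
        }

        bool empty() const { return s.empty(); }
    };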

Other Tips

For the other approach: instead of std::set you may use a btree::btree_set (or btree::safe_btree_set). This is an implementation with the same interface as std::set, made by Google, but using a B-tree rather than the red-black tree the STL typically uses. It is still O(log N), performs noticeably better than std::set, and has a much lower memory footprint. Check the performance comparison: http://code.google.com/p/cpp-btree/wiki/UsageInstructions
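If I remember the cpp-btree API correctly (the header name and include path are assumptions; check the link above), it is a drop-in replacement for std::set, so the same erase-old/insert-new trick applies:

    // Assumes the cpp-btree headers from the link above are on the include path;
    // btree::btree_set is meant as a drop-in replacement for std::set.
    #include "btree_set.h"
    #include <utility>

    using Entry = std::pair<long long, int>;   // (key, vertex)

    void decreaseKey(btree::btree_set<Entry>& s, Entry oldEntry, long long newKey) {
        s.erase(oldEntry);                     // same erase-old / insert-new trick
        s.insert({newKey, oldEntry.second});
    }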

I'm no expert, so I hope this is not too dumb, but would a vector combined with lower_bound work well?

If you use lower_bound to find the correct place to insert new values, your vector stays sorted as you build it, so no separate sorting pass is required. And when the vector is sorted, isn't lower_bound a binary search with logarithmic-class performance?

Since it is sorted, finding the min value (or max) is a snap.

To reduce a key, you'd do a lower_bound search, delete the entry, and then do lower_bound again to insert the reduced key, which comes to two logarithmic-class operations. Still not bad.
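Something like this rough sketch, I guess (the names are just illustrative):

    #include <algorithm>
    #include <utility>
    #include <vector>

    // Sorted vector of (key, vertex) pairs. lower_bound is an O(log N) binary
    // search, but erase/insert still shift elements, which costs O(N).
    using Entry = std::pair<long long, int>;

    void decreaseKey(std::vector<Entry>& v, long long oldKey, long long newKey, int vertex) {
        // locate and remove the old entry
        auto it = std::lower_bound(v.begin(), v.end(), Entry{oldKey, vertex});
        if (it != v.end() && *it == Entry{oldKey, vertex})
            v.erase(it);
        // insert the reduced key at its sorted position
        auto pos = std::lower_bound(v.begin(), v.end(), Entry{newKey, vertex});
        v.insert(pos, {newKey, vertex});
    }

    // The minimum is then simply v.front().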

Alternatively, you could update the key and re-sort the vector. I would guess that with random access that might still be fast, but I don't know exactly what the STL does there.

With a sorted vector, if you know the candidate key is less than the one already in there, then maybe you could even sort just the portion of the vector holding the lesser values, for even better performance.

Another consideration: I think sets/maps have quite a bit more memory overhead than vectors do.

I think general sorting is bounded by O(N log N), so two O(log N) operations to re-insert, rather than re-sorting, might be better for the reduce-key operation.

The other thing is that inserting into a vector is not so hot, but on the whole, does the idea of a vector with lower_bound seem worth considering?

thanks

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow