Question

I recently made an initial comparison of the running time of Dijkstra's algorithm using two data structures, the Java-based PriorityQueue (based on a binary heap, if I'm not mistaken), and a Fibonacci heap. I used Java's currentTimeMillis() to make my calculations. The results I ended up with are quite interesting. This is the output for one of my testcases:

Running Dijkstra's with 8 nodes and 27 links
- Execution time with binary heap: 1 milliseconds
- Execution time with Fibonacci heap: 4 milliseconds

Admittedly, I'm short on data sets at the moment, with the above graph being my largest (I plan to make more soon). But does this make any sense? I have always thought Fibonacci heaps were faster than other data structures due to their amortized running time compared to the other data structures. I'm not really sure where this 3-millisecond difference is coming from. (I'm running it on an Intel Core Ivy Bridge i7-3630M processor, if that helps.)
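
As a side note on the measurement itself: at this scale, currentTimeMillis() resolution and JIT warm-up can easily swamp a few milliseconds of real work. A minimal sketch of a more robust timing loop, using System.nanoTime() and a warm-up phase (the work() method here is just a stand-in for one Dijkstra run on your graph):

```java
public class Timing {
    // Stand-in workload; replace with a single Dijkstra run on your graph.
    static long work() {
        long s = 0;
        for (int i = 0; i < 1_000; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        // Warm-up: let the JIT compile the hot path before measuring.
        for (int i = 0; i < 10_000; i++) work();

        int runs = 1_000;
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) work();
        long elapsed = System.nanoTime() - start;
        System.out.printf("avg %.1f ns per run%n", (double) elapsed / runs);
    }
}
```

Averaging over many runs after warm-up gives far more stable numbers than a single currentTimeMillis() difference.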

Note: I stumbled upon this thread which might explain the issue, though I'm still unclear as to why the Fibonacci heap version is taking longer. According to that thread, it could be because my graph is not dense enough, and therefore the number of decrease-key operations is not large enough for the performance of the Fibonacci heap to really shine. Would this be the only plausible conclusion, or is there something else I'm missing?

Solution

Fibonacci heaps are asymptotically faster than binary heaps (the data structure used in Java's priority queue), in that Dijkstra's algorithm will take O(m + n log n) time with a Fibonacci heap but O(m log n) with a binary heap. This means that for large, dense graphs, in the worst case, Fibonacci heaps will be faster.
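
For reference, Dijkstra's algorithm with Java's PriorityQueue is usually written with "lazy deletion," since PriorityQueue has no decrease-key operation: instead of updating an entry, you re-insert the node with its new distance and skip stale entries on poll. A minimal sketch, assuming the graph is an adjacency list where each edge is an int[]{target, weight} pair:

```java
import java.util.*;

public class Dijkstra {
    // Single-source shortest paths with lazy deletion: stale queue
    // entries are detected and skipped when polled.
    static int[] shortestPaths(List<List<int[]>> adj, int source) {
        int n = adj.size();
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;
        // Queue entries are {node, distance}, ordered by distance.
        PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt(e -> e[1]));
        pq.add(new int[]{source, 0});
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int u = top[0], d = top[1];
            if (d > dist[u]) continue;       // stale entry; skip
            for (int[] edge : adj.get(u)) {  // edge = {target, weight}
                int v = edge[0], w = edge[1];
                if (dist[u] + w < dist[v]) {
                    dist[v] = dist[u] + w;
                    pq.add(new int[]{v, dist[v]}); // re-insert instead of decrease-key
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // Tiny example: 0 -> 1 (weight 4), 0 -> 2 (weight 1), 2 -> 1 (weight 2)
        List<List<int[]>> adj = new ArrayList<>();
        for (int i = 0; i < 3; i++) adj.add(new ArrayList<>());
        adj.get(0).add(new int[]{1, 4});
        adj.get(0).add(new int[]{2, 1});
        adj.get(2).add(new int[]{1, 2});
        System.out.println(Arrays.toString(shortestPaths(adj, 0))); // [0, 3, 1]
    }
}
```

The re-insertions can bloat the queue to O(m) entries, which is part of why the binary-heap version is O(m log n), but each individual operation is very cheap.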

First, although Fibonacci heaps are asymptotically faster than binary heaps, they have notoriously large constant factors, and many basic operations on Fibonacci heaps take a long time to complete. In the long run they'll outperform binary heaps, but for small graphs the constant terms might be so large that the Fibonacci heap is actually slower.

Second, compare the asymptotic runtimes (O(m + n log n) versus O(m log n)). If the graph you're using is sparse (that is, m = O(n)), then both of these asymptotic runtimes are the same (O(n log n)). In that case, the theoretical advantage of Fibonacci heaps isn't present and the binary heap might be the superior choice.

Finally, note that big-O notation refers to worst-case behavior in this case, rather than average-case. There was a paper a while back showing that for random graphs of a certain type, Dijkstra's algorithm on expectation performs far fewer decrease-key and dequeue operations than in the worst case. In that case, a binary heap could outperform a Fibonacci heap even on large graphs, since the worst-case behavior might never get triggered.

Hope this helps!

OTHER TIPS

Fibonacci heaps have faster asymptotics, but their constant factors aren't necessarily great. On ludicrously huge inputs, of a million elements or more, they might be faster, but for small inputs binary heaps are likely to be noticeably faster.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow