Fibonacci heaps are asymptotically faster than binary heaps (the data structure backing Java's PriorityQueue): Dijkstra's algorithm runs in O(m + n log n) time with a Fibonacci heap but O(m log n) with a binary heap. This means that for large, dense graphs, in the worst case, the Fibonacci heap will be faster.
First, although Fibonacci heaps are asymptotically faster than binary heaps, they have notoriously large constant factors: each basic operation involves far more pointer manipulation and bookkeeping than its binary-heap counterpart. On large enough inputs they'll pull ahead, but for small graphs the constant terms can dominate, making the Fibonacci heap actually slower.
Second, compare the asymptotic runtimes (O(m + n log n) versus O(m log n)). If the graph you're using is sparse (that is, m = O(n)), then both bounds collapse to the same O(n log n). In that case the theoretical advantage of the Fibonacci heap disappears, and the binary heap may well be the superior choice.
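To make the comparison concrete, here's a minimal sketch of Dijkstra's algorithm on top of Java's PriorityQueue. Note that PriorityQueue has no decrease-key operation, so the usual workaround is "lazy deletion": re-insert a vertex whenever its distance improves and skip stale entries when they're polled. The graph representation (`adj[u]` as a list of `{neighbor, weight}` pairs) is just an assumption for this sketch.

```java
import java.util.*;

public class DijkstraSketch {
    // Returns shortest distances from `source`, assuming adj[u] holds
    // {v, weight} pairs with non-negative weights.
    public static int[] dijkstra(List<int[]>[] adj, int source) {
        int n = adj.length;
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;

        // Queue entries are {distance, vertex}, ordered by distance.
        PriorityQueue<int[]> pq =
            new PriorityQueue<>(Comparator.comparingInt(e -> e[0]));
        pq.add(new int[]{0, source});

        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int d = top[0], u = top[1];
            if (d > dist[u]) continue;  // stale entry left by lazy deletion
            for (int[] edge : adj[u]) {
                int v = edge[0], w = edge[1];
                if (dist[u] + w < dist[v]) {
                    dist[v] = dist[u] + w;
                    // No decrease-key available: re-insert instead.
                    pq.add(new int[]{dist[v], v});
                }
            }
        }
        return dist;
    }
}
```

With a Fibonacci heap you'd call decrease-key at the relaxation step instead of re-inserting, which is exactly where the O(1) amortized decrease-key saves the extra log factor on dense graphs.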
Finally, note that these big-O bounds describe worst-case behavior, not average-case behavior. A paper a while back showed that for certain classes of random graphs, Dijkstra's algorithm performs, in expectation, far fewer decrease-key and dequeue operations than the worst case suggests. In that setting, a binary heap could outperform a Fibonacci heap even on large graphs, since the worst-case behavior might never be triggered.
Hope this helps!