Question

When using BFS on an unweighted graph to find single-source shortest paths, no relaxation step is required, because breadth-first search guarantees that when we first reach a node, we have found the shortest path to it.
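
For illustration, a minimal BFS sketch on this kind of adjacency-dict representation might look like the following (the function name is arbitrary):

from collections import deque

def bfs_shortest_path_lengths(graph, source):
    # Unweighted graph: the first time we reach a vertex, its distance
    # is final, so no relaxation step is ever needed.
    dist = {source: 0}
    q = deque([source])
    while q:
        v1 = q.popleft()
        for v2 in graph[v1]:
            if v2 not in dist:  # not visited yet
                dist[v2] = dist[v1] + 1
                q.append(v2)
    return dist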

However, my understanding is that Dijkstra's algorithm has no such guarantee because the neighbours of each node are checked in no specific order. Therefore, we need a relaxation step to update the shortest path of each node if we later find a shorter path.

Instead of choosing a random neighbour, why not always follow the neighbour with the shortest path? I think that would guarantee we have always found the shortest path and no relaxation step is required. I implemented this in Python and it seems to work. Am I missing something?

from heapq import heappop, heappush

def shortest_path_lengths(graph, source):
    dist = {}
    q = [(0, source)]
    while q:
        source_to_v1_dist, v1 = heappop(q)
        if v1 in dist:
            continue
        dist[v1] = source_to_v1_dist
        for v2, v1_to_v2_dist in graph[v1].items():
            if v2 not in dist:  # check it hasn't been visited already
                heappush(q, (source_to_v1_dist + v1_to_v2_dist, v2))
    return dist


graph = {
    'a': {'b': 3, 'c': 5},
    'b': {'c': 1},
    'c': {'d': 4},
    'd': {},
}
print(shortest_path_lengths(graph, 'a'))
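# Expected output on CPython 3.7+ (insertion-ordered dicts): {'a': 0, 'b': 3, 'c': 4, 'd': 8}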

Solution

Congratulations! You have discovered an interesting variation of Dijkstra's algorithm.


However, it is misleading to state that the standard version of Dijkstra's algorithm (stDA) described here, as mentioned in the question, does not use a shortest-path-first search. You might have overlooked the following statement in that article:

"We now need to pick a new current node. That node must be the unvisited node with the smallest minimum distance."

In fact, using a shortest-path-first search, so that the weight of each edge is checked at most once, is the hallmark of Dijkstra's algorithm. That is the main reason your algorithm can be labelled a variation of Dijkstra's algorithm.


While the neighbours of each node are selected in no specific order in the stDA, your algorithm makes the same unordered selection in the following lines:

        for v2, v1_to_v2_dist in graph[v1].items():
            # code to record the distance to v2 via v1. 

Instead of including each vertex exactly once in the priority queue of unvisited vertices, as in the stDA, your algorithm pushes each vertex multiple times: once for every path that could possibly be the shortest path to that vertex at the moment it is found. (This might make your algorithm slightly slower than the stDA, since inserting a new distance-vertex pair into the larger min-heap will usually take more operations than the stDA needs to update the shortest distance found so far to that vertex, for example when the new distance is not shorter than the current best.)
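
For comparison, here is a minimal sketch of that textbook approach in the same style as your function (function and variable names are illustrative; it uses a simple linear scan to pick the next vertex instead of a decrease-key priority queue, purely to keep the relaxation step explicit, and it assumes every vertex appears as a key of graph, as in your example):

def dijkstra_standard(graph, source):
    # Textbook Dijkstra: each vertex is selected at most once, and a separate
    # relaxation step updates the tentative distances in place.
    INF = float('inf')
    tentative = {v: INF for v in graph}
    tentative[source] = 0
    visited = set()
    while len(visited) < len(graph):
        # pick the unvisited vertex with the smallest tentative distance
        v1 = min((v for v in graph if v not in visited), key=tentative.get)
        if tentative[v1] == INF:
            break  # the remaining vertices are unreachable
        visited.add(v1)
        # relaxation: going through v1 may shorten the path to a neighbour
        for v2, w in graph[v1].items():
            if tentative[v1] + w < tentative[v2]:
                tentative[v2] = tentative[v1] + w
    return {v: d for v, d in tentative.items() if d < INF}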


Instead of performing a separate relaxation step as in the stDA, your algorithm cleverly accomplishes the goal of the relaxation step in the same step that selects the shortest path to an unvisited vertex, by also ignoring all other paths to already-visited vertices. It is fair to say your algorithm includes the relaxation step implicitly.
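
Concretely, the implicit relaxation lives in these two checks of your function:

        if v1 in dist:      # a shorter path to v1 was already finalised,
            continue        # so this longer candidate path is simply dropped
        dist[v1] = source_to_v1_dist
        for v2, v1_to_v2_dist in graph[v1].items():
            if v2 not in dist:  # paths back to already-visited vertices
                                # are never pushed in the first place
                heappush(q, (source_to_v1_dist + v1_to_v2_dist, v2))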

Overall, your algorithm is slightly easier to code than the stDA.

OTHER TIPS

Dijkstra's algorithm already does follow the shortest path found so far to one of the remaining nodes at each step.

I think your confusion lies in the "enqueue each neighbor of the current node" step: we can't just follow the shortest one of these to get the next node because the next shortest path might not be to a neighbor of the current node.

Thus we enqueue it as a potential shortest path for future consideration, and if we later find an even shorter path to that same neighbor-node, we replace it.
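
In code, that "replace it" step is the usual relaxation test. Here is a hypothetical helper sketching it (the relax name and the best dict of best-known distances are illustrative, not part of the answer above):

from heapq import heappush

def relax(best, q, graph, current, neighbour):
    # If the path through `current` is shorter than the best distance recorded
    # for `neighbour` so far, record it and enqueue the improved candidate.
    new_dist = best[current] + graph[current][neighbour]
    if new_dist < best.get(neighbour, float('inf')):
        best[neighbour] = new_dist
        heappush(q, (new_dist, neighbour))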

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange