Question

So I thought this (though somewhat basic) question belonged here:

Say I have a graph of 100 nodes arranged in a 10x10 grid (think chessboard). The graph is undirected and unweighted. Moving through the graph involves moving three spaces in one direction and one space to either the right or left (similar to how a chess knight moves across a board, except with a three-square leap instead of two).

Given a fixed beginning node, how would one find the shortest path to any other node on the board?

I imagined that there would be an edge only between nodes that are one viable move apart. Given that, I want to find the shortest path from a starting node to an ending node.
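For concreteness, the graph can be represented implicitly rather than stored explicitly. Here is a minimal sketch in Python (the names `BOARD_SIZE`, `MOVES`, and `neighbors`, the 0-indexed (row, column) coordinates, and the exact move set are illustrative assumptions, not part of the original question):

    BOARD_SIZE = 10                      # assumption: the 10x10 board above

    # The eight (3, 1) "leaps": three squares one way, one square sideways.
    MOVES = [(3, 1), (3, -1), (-3, 1), (-3, -1),
             (1, 3), (1, -3), (-1, 3), (-1, -3)]

    def neighbors(square):
        """Yield every square reachable from `square` in one move."""
        r, c = square
        for dr, dc in MOVES:
            nr, nc = r + dr, c + dc
            if 0 <= nr < BOARD_SIZE and 0 <= nc < BOARD_SIZE:
                yield (nr, nc)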

My initial thought was to give each edge a weight of 1. However, the graph is undirected, so I assumed Dijkstra's algorithm would not be an ideal fit. Therefore, I decided to do it using an altered form of depth-first search.

However, I couldn't for the life of me visualize how to get the shortest path using the search.

Another thing I tried was putting the graph in tree form with the starting node as the root, and then selecting the shallowest (smallest depth) occurrence of the desired end node... this worked, but it was incredibly inefficient and thus would not work for a larger graph.

Does anyone have any ideas that might point me in the right direction on this one?

Thank you very much.

(I tried to put in a visualization of the graph, but was unable to due to my low reputation)

Solution

If the edges in the graph only represent valid moves between positions, Dijkstra's algorithm would work just fine. However, since the graph is unweighted, it would be overkill: a simple breadth-first search gives the optimal answer.
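To make this concrete, here is a minimal BFS sketch in Python (the board setup, move set, and function names are illustrative assumptions, not from the original answer). Because every edge costs 1, the first time BFS reaches a node it has done so along a shortest path:

    from collections import deque

    BOARD_SIZE = 10
    MOVES = [(3, 1), (3, -1), (-3, 1), (-3, -1),
             (1, 3), (1, -3), (-1, 3), (-1, -3)]

    def bfs_shortest_path(start, goal):
        """Return a shortest list of squares from start to goal, or None."""
        parents = {start: None}          # doubles as the visited set
        queue = deque([start])
        while queue:
            square = queue.popleft()
            if square == goal:           # walk the parent links back to start
                path = []
                while square is not None:
                    path.append(square)
                    square = parents[square]
                return path[::-1]
            r, c = square
            for dr, dc in MOVES:
                nxt = (r + dr, c + dc)
                if (0 <= nxt[0] < BOARD_SIZE and 0 <= nxt[1] < BOARD_SIZE
                        and nxt not in parents):
                    parents[nxt] = square
                    queue.append(nxt)
        return None                      # goal is unreachable from start

    print(bfs_shortest_path((0, 0), (4, 4)))  # e.g. [(0, 0), (3, 1), (4, 4)]

Marking nodes as visited when they are enqueued (rather than when dequeued) keeps each node in the queue at most once, so the search runs in time linear in the number of nodes and edges.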

OTHER TIPS

Nicholas already provided a perfect answer. However, let me address your original attempt to use depth-first search.

First, both Dijkstra's algorithm (which works fine on unweighted graphs, as noted by Nicholas Mancuso) and breadth-first search can consume memory exponential in the depth of the solution. Their advantage is that they never re-expand any nodes and are guaranteed to find optimal solutions. Unfortunately, that memory requirement is a serious limitation, and they should not be expected to scale up well.

If you want to solve large instances of your problem, use Iterative-Deepening Depth-First Search (IDDFS). Issue a depth-first search from your start state with a maximum depth set to a specific threshold, $d_{max}$. If you have not found the solution, increase the depth limit of the last iteration by a fixed constant $k$. Thus, in the $i$-th iteration, a depth-first search is launched with depth limit $d_{max} + i\times k$ (with the first iteration numbered 0). If $d_{max}=k=1$, then you are guaranteed to find the optimal solution while using memory linear in the depth of the solution.
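As an illustration, here is a minimal IDDFS sketch in Python with $d_{max}=k=1$ (the function names, the board-move setup, and the path-based cycle check are assumptions for illustration, not part of the original answer):

    BOARD_SIZE = 10
    MOVES = [(3, 1), (3, -1), (-3, 1), (-3, -1),
             (1, 3), (1, -3), (-1, 3), (-1, -3)]

    def depth_limited_search(square, goal, limit, path):
        """Depth-first search from `square`, exploring at most `limit` more moves."""
        if square == goal:
            return list(path)
        if limit == 0:
            return None
        r, c = square
        for dr, dc in MOVES:
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < BOARD_SIZE and 0 <= nxt[1] < BOARD_SIZE
                    and nxt not in path):   # do not revisit nodes on the current path
                path.append(nxt)
                found = depth_limited_search(nxt, goal, limit - 1, path)
                path.pop()
                if found is not None:
                    return found
        return None

    def iddfs(start, goal, max_depth=100):
        """Re-run depth-first search with limits 0, 1, 2, ...; the first limit
        at which the goal appears yields a shortest path, and memory stays
        linear in the current depth limit."""
        for limit in range(max_depth + 1):
            found = depth_limited_search(start, goal, limit, [start])
            if found is not None:
                return found
        return None

    print(iddfs((0, 0), (4, 4)))  # e.g. [(0, 0), (3, 1), (4, 4)]

Note that only the current path is stored, never a visited set over the whole graph, which is exactly where the linear memory guarantee comes from.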

Well, you might be thinking that re-expanding nodes is a rather bad idea. Not at all! Re-expansion is precisely what guarantees linear memory consumption, and since the iteration dominating the overall running time is the last one, it can be shown that the algorithm incurs an overhead factor of only $\frac{b}{b-1}$, where $b$ is the effective branching factor. That is a small penalty well worth paying when facing hard problems.
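To see where the $\frac{b}{b-1}$ factor comes from (a standard sketch, assuming a uniform branching factor $b$ and a solution at depth $d$): the final iteration generates about $b^{d}$ nodes, while all iterations together generate

$$\sum_{j=1}^{d} b^{j} \;=\; b^{d}\left(1 + \frac{1}{b} + \cdots + \frac{1}{b^{d-1}}\right) \;<\; b^{d}\cdot\frac{b}{b-1},$$

so the total work exceeds that of the last iteration alone by a factor of at most $\frac{b}{b-1}$. For example, with $b = 8$ (the eight knight-like moves here), the overhead is roughly $8/7 \approx 1.14$.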

Cheers,

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange