Your algorithm runs in O(V^3) time because in each iteration you scan the entire adjacency matrix, which takes O(V^2) and repeats redundant work.
Whenever you add a vertex to the spanning tree, at most V-1 new edges become candidates for the solution. At each iteration you only need to check whether these edges lower the minimal known weight to each of the remaining vertices.
The algorithm should look like:
1. Select a random vertex v, add v to S, and initialize an array A where A[i] = d{v, i}
2. While there are vertices that belong to G and not to S do:
2.1. Iterate through A, select the minimum value A[i], add vertex i to S
2.2. for each edge e={i, j} connected to vertex i do:
2.2.1. if d{i, j} < A[j] then A[j] = d{i ,j}
This way you perform O(V) work for each vertex you add instead of O(V^2), so the overall running time is O(V^2).
Here's an edit to your code:
Select a random vertex x
Initialize DistanceTo // DistanceTo[i] = d{x, i}
Initialize Parent     // Parent[i] = x; tracks which tree vertex each candidate edge comes from
Initialize Visited    // Visited[i] = false if i != x, Visited[x] = true
cost_of_mst = 0
while(--no_of_vertices)
{
    min_val = infinite
    min_index = -1
    for(i from 1 to DistanceTo.size)
    {
        if (Visited[i] == false && DistanceTo[i] < min_val)
        {
            min_val = DistanceTo[i]
            min_index = i
        }
    }
    Visited[min_index] = true
    print edge {Parent[min_index], min_index} was added to the MST
    cost_of_mst += min_val
    for(i from 1 to DistanceTo.size)
    {
        if (Visited[i] == false && d{min_index, i} < DistanceTo[i])
        {
            DistanceTo[i] = d{min_index, i}
            Parent[i] = min_index
        }
    }
}
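For concreteness, here is a runnable sketch of the same O(V^2) version in Python. It is an illustration under my own assumptions: `graph` is a V x V adjacency matrix with `INF` marking a missing edge, and the start vertex is fixed to 0 instead of being chosen at random.

```python
# Sketch of Prim's algorithm in O(V^2) over an adjacency matrix.
# Assumptions (not from the original post): graph[i][j] is the edge
# weight between i and j, INF means "no edge", start vertex is 0.

INF = float('inf')

def prim_mst(graph):
    n = len(graph)
    dist = graph[0][:]         # dist[i] = cheapest known edge from the tree to i
    parent = [0] * n           # parent[i] = tree endpoint of that cheapest edge
    visited = [False] * n
    visited[0] = True          # start the tree at vertex 0
    cost = 0
    edges = []
    for _ in range(n - 1):
        # O(V): pick the unvisited vertex closest to the tree
        min_index, min_val = -1, INF
        for i in range(n):
            if not visited[i] and dist[i] < min_val:
                min_val, min_index = dist[i], i
        visited[min_index] = True
        edges.append((parent[min_index], min_index))
        cost += min_val
        # O(V): relax only the edges leaving the newly added vertex
        for j in range(n):
            if not visited[j] and graph[min_index][j] < dist[j]:
                dist[j] = graph[min_index][j]
                parent[j] = min_index
    return cost, edges
```

Both loops inside each iteration are O(V), and there are V-1 iterations, giving the O(V^2) total discussed above.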