Question

I am a bit confused about how to determine the running time of graph algorithms, that is, how to estimate a running time in terms of the number of vertices (n) and the number of edges (m). Could someone explain where my logic is flawed and how to properly analyze these algorithms to find their running times?

Here is an example graph algorithm:

//input: Directed graph G represented by adjacency lists.
Func1(G) {
    k = 0;
    foreach vertex vi in V(G) do
        foreach directed edge (vi, vj) incident on vi do
            k = k + 1; //just some constant-time operation
        end
    end
    return(k);
}

The running time of this graph algorithm is given as O(m + n) in Big-O notation. Now here is my question:

Just looking at this algorithm for a quick mental analysis, why isn't the running time O(n*m) instead? The outer loop runs n times, in other words once for each vertex. The inner loop, I thought, runs once for each edge, that is m times. Thus I figured that both loops together would run n*m times.

One last, less trivial question: how would the running time of the algorithm change if the input were given as an adjacency matrix rather than an adjacency list?

I cannot find a good resource for this online; there are no clear and concise examples covering this topic. I would really appreciate it if someone could help me create one :)


Solution

The inner loop does not run once for every edge in the graph; it runs once for every edge incident on vi. Over the whole outer loop, each vertex is visited once and each directed edge is visited exactly once, since it appears only in the adjacency list of its source vertex. (In an undirected adjacency list each edge shows up in two lists and is visited twice, but O(n + 2m) is still O(n + m).) So the running time is O(n + m) using an adjacency list.
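
To make the counting concrete, here is a rough Python sketch of the same algorithm over adjacency lists. The example graph, its dictionary representation, and the extra step counters are purely illustrative assumptions, not part of the original algorithm:

# A small directed graph as adjacency lists: vertex -> list of out-neighbours.
# (Hypothetical example data, just to make the operation counts visible.)
graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": [],
    "d": ["a"],
}

def func1(adj):
    k = 0
    outer_steps = 0   # incremented once per vertex        -> n in total
    inner_steps = 0   # incremented once per directed edge -> m in total
    for v in adj:                # outer loop: every vertex
        outer_steps += 1
        for _w in adj[v]:        # inner loop: only the edges leaving v
            inner_steps += 1
            k += 1               # the constant-time operation
    return k, outer_steps, inner_steps

print(func1(graph))   # (4, 4, 4): 4 edges counted, 4 vertex visits, 4 edge visits

Here n = 4 and m = 4, and the inner-loop body runs 4 times in total (once per edge), not 4 * 4 = 16 times; that gap is exactly the difference between O(n + m) and O(n*m).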

Using an adjacency matrix, finding the edges incident on vi means scanning an entire row of n entries, regardless of how many edges vi actually has. Doing that for each of the n vertices makes the algorithm O(n²).
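
For comparison, here is a sketch of the matrix version under the same illustrative assumptions (the 4x4 matrix below encodes the same made-up graph as above):

# matrix[i][j] == 1 means there is a directed edge from vertex i to vertex j.
matrix = [
    [0, 1, 1, 0],   # a -> b, a -> c
    [0, 0, 1, 0],   # b -> c
    [0, 0, 0, 0],   # c has no outgoing edges
    [1, 0, 0, 0],   # d -> a
]

def func1_matrix(mat):
    n = len(mat)
    k = 0
    cells_checked = 0
    for i in range(n):           # outer loop: every vertex
        for j in range(n):       # must scan the whole row to find i's edges
            cells_checked += 1
            if mat[i][j] == 1:
                k += 1           # the constant-time operation
    return k, cells_checked

print(func1_matrix(matrix))   # (4, 16): 4 edges found, but 16 = n*n cells inspected

The same 4 edges are counted, but n² = 16 cells are inspected, because each row scan costs n steps whether or not the vertex has any outgoing edges.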

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow