Question

I'm trying to understand the complexity of Kruskal's algorithm implemented with Quick-Union by rank and path compression.

There is a theorem for this structure:

The complexity of any sequence of $m$ Makeset, Union, and Find operations, $n$ of which are Makeset operations, on a Quick-Union with union by rank and path compression is in the worst case $O(m\,G(n))$, where $G(n)$ is the inverse of the Ackermann function.
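For concreteness, here is a minimal sketch of the structure the theorem refers to, in Python. The class and method names are my own illustrative choices, not from the book:

```python
class DisjointSet:
    """Quick-Union (disjoint-set forest) with union by rank
    and path compression."""

    def __init__(self):
        self.parent = {}
        self.rank = {}

    def makeset(self, x):
        # Create a singleton tree containing only x.
        self.parent[x] = x
        self.rank[x] = 0

    def find(self, x):
        # Path compression: repoint x and its ancestors at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```

With both heuristics, any sequence of operations matches the $O(m\,G(n))$ bound quoted above.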


Here is Kruskal's algorithm in pseudocode. Let $G=(V,E)$ be an undirected, connected, weighted graph, where $V$ is the set of vertices and $E$ the set of edges; the following algorithm returns a set $A$ containing the edges of a Minimum Spanning Tree.

Kruskal(Graph G) {
    Set A = empty set

    For every vertex v in V {
         Makeset(v)
    }

    Order the edges in E in ascending order by weight

    For every edge (u,v) {
        If find(u) != find(v) {
            union(u,v)
            A = A + (u,v)
        }
    }

    Return A
}
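The pseudocode above can be sketched as a runnable Python function. This is only an illustrative translation (the `(weight, u, v)` edge encoding and the inline union-find are my own choices, not from the book):

```python
def kruskal(vertices, edges):
    """Return the MST edges of a connected weighted graph.

    `edges` is a list of (weight, u, v) triples; sorting the list
    therefore orders edges by ascending weight.
    """
    # Makeset for every vertex: each starts as its own component.
    parent = {v: v for v in vertices}
    rank = {v: 0 for v in vertices}

    def find(x):
        if parent[x] != x:
            parent[x] = find(parent[x])  # path compression
        return parent[x]

    def union(x, y):
        rx, ry = find(x), find(y)
        if rank[rx] < rank[ry]:          # union by rank
            rx, ry = ry, rx
        parent[ry] = rx
        if rank[rx] == rank[ry]:
            rank[rx] += 1

    A = []
    for w, u, v in sorted(edges):        # the O(m log m) sort
        if find(u) != find(v):           # distinct components?
            union(u, v)
            A.append((u, v))
    return A
```

The loop performs at most $2m$ Finds and at most $n-1$ Unions, which is the operation count the complexity analysis below is about.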

Let's go back to my problem.
I understand that the second loop executes at most $m$ times (the number of edges). Hence the number of Union and Find operations is $O(m)$, while the number of Makeset operations is $n$ (the number of vertices).

The book I'm studying from, after making similar observations, says that the total cost of these operations is $O((m+n)\,G(n))$, and I don't get why. Of course, the additional cost of sorting the edges is $O(m \log m)$, so the overall complexity is:
$O(m \log m) + O((m+n)\,G(n))$
But $m \ge n-1$ since the graph is connected, so
$= O(m \log m) + O(m\,G(n))$
However $m < n^2 \implies \log m < 2\log n \implies \log m = O(\log n)$, and since $G(n) = O(\log n)$,
$= O(m \log n) + O(m\,G(n))$
$= O(m \log n)$

Can you explain to me how to get the $O((m+n)\,\,G(n))$ complexity from the algorithm?


Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange