Question

Given a graph G=(V, E), a subset S of V, and the subset S' containing every vertex of G not in S, I want to count the total number of edges between the nodes of S and S'.

Is there an algorithm that solves this problem with complexity better than O(n^2)?


Solution

Assuming that by "better complexity than O(n^2)" you mean something like O(|E|), you can do that with a hashing structure. Put all nodes of S into a hashset, iterate over all edges of G, and for each edge check whether exactly one of its endpoints is in the hashset; if so, that endpoint is in S, the other is in S', and the edge crosses between them. Building the hashset is O(n), and assuming a sensible hash function, processing all edges is O(|E|) expected time. If |E| is in Omega(n^2), you can't do better than O(n^2).
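A minimal Python sketch of this approach, assuming the graph is given as a plain list of edge pairs (the function and variable names are illustrative):

    def count_cut_edges(edges, s_nodes):
        """Count the edges with exactly one endpoint in S."""
        s = set(s_nodes)                   # build the hashset, O(|S|) expected
        count = 0
        for u, v in edges:                 # one pass over E, O(|E|) expected
            if (u in s) != (v in s):       # exactly one endpoint lies in S
                count += 1
        return count

    # Example: S = {1, 2}, S' = {3, 4}; edges (2,3), (1,4) and (1,3) cross.
    edges = [(1, 2), (2, 3), (3, 4), (1, 4), (1, 3)]
    print(count_cut_edges(edges, {1, 2}))  # -> 3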

EDIT: two things:

  • the last statement, that you can't do better if |E| is in Omega(n^2), is wrong depending on the representation you use for the graph. Let E' = {e = {s,v} in E | s in S} be the set of edges incident with at least one node in S. If you have incidence/adjacency lists, you can improve the complexity to O(|E'|) by iterating only over the edges incident with a node in S, and |E'| may be smaller than |E| by a non-constant factor, depending on S (see the first sketch after this list).
  • the approach can also be phrased with two hashsets: put all nodes of S into the first and all nodes of V\S into the other, and accept an edge only if one endpoint lies in the first hashset and the other endpoint in the second. Depending on the sizes and densities of the two induced subgraphs, it pays off to iterate only over the edges incident with whichever side is incident with fewer edges in total (see the second sketch after this list).
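The first point might look like this in Python, assuming the graph is stored as a dict of adjacency lists (an assumed, illustrative representation):

    def count_cut_edges_adjacency(adj, s_nodes):
        """Count cut edges in O(|E'|) by scanning only S's adjacency lists."""
        s = set(s_nodes)
        count = 0
        for u in s:                    # only nodes in S are visited
            for v in adj.get(u, ()):   # only edges incident with S
                if v not in s:         # the edge leaves S, so it crosses
                    count += 1
        return count                   # each cut edge is seen once, from its S side

    # The same example graph as above, as undirected adjacency lists.
    adj = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2, 4], 4: [1, 3]}
    print(count_cut_edges_adjacency(adj, {1, 2}))  # -> 3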
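And a sketch of the second point, choosing whichever side of the cut is incident with fewer edges before scanning; note that picking the side costs an extra O(n) pass over the vertices (again assuming the dict-of-adjacency-lists representation):

    def count_cut_edges_smaller_side(adj, s_nodes):
        """Scan the adjacency lists of the side with the smaller total degree."""
        s = set(s_nodes)
        t = set(adj) - s               # V \ S
        def total_degree(side):
            return sum(len(adj.get(u, ())) for u in side)
        scan = s if total_degree(s) <= total_degree(t) else t
        # An edge crosses the cut iff it leaves the side being scanned.
        return sum(1 for u in scan for v in adj.get(u, ()) if v not in scan)

    adj = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2, 4], 4: [1, 3]}
    print(count_cut_edges_smaller_side(adj, {1, 2}))  # -> 3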