Question

I'm working on a problem to generate the power set of a given set. The algorithm itself is fairly straightforward:

def power_sets(A):
    """Returns the power set of A as a list of lists"""

    if len(A) == 0:
        return [[]]

    else:
        ps_n1 = power_sets(A[:-1]) # Powerset of set without last element
        # Add nth element to each subset of the powerset of n-1 elements
        ps_n = [sub + [A[-1]] for sub in ps_n1]

        # Combine the two sets and return 
        return ps_n1 + ps_n

It's clear that the space complexity is $O(n2^n)$: there are $n$ items in the set, and each element appears in half of the $2^n$ subsets, for a total of $n \cdot 2^{n-1}$ stored elements.
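A quick empirical check (the `total_elements` helper below is just for illustration, not part of the problem) confirms that count on small inputs:

def total_elements(A):
    """Total number of elements stored across all subsets of power_sets(A)."""
    return sum(len(sub) for sub in power_sets(A))

for n in range(1, 8):
    # Each of the n elements appears in half of the 2^n subsets.
    assert total_elements(list(range(n))) == n * 2 ** (n - 1)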

However, the book that I'm working from says the time complexity is also $O(n2^n)$, which makes perfect intuitive sense, but I'm having a hard time justifying it mathematically.

Other than saying "there are $x$ items to produce, so the time complexity must be at least that much", can anyone offer an explanation of the runtime analysis based on the runtime of the statements in my algorithm?

This answer pretty much only says that the runtime follows from the space complexity, which is not very satisfying (as an aside: can the runtime ever be better than the space complexity?).

I saw this answer, but to be honest it is a bit difficult to understand. It seems to suggest in the last line that the runtime is $O(n^2*2^{n!})$ (since (I think) $|P_i| = 2^i$), and that doesn't seem right either.

I tried drawing out the call tree and that didn't help, as there are only $n-1$ recursive calls made, and from what I can tell each spends $O(n^2)$ time.

Anyway, can someone offer a compelling explanation for this runtime?


Solution

Observe that:

$T(N) = 2T(N-1) + 2^{N-1}$

$T(1) = 2$

Note that for the entire runtime, our algorithm only spends time generating more data. Therefore, our function $T$ represents both the amount of time the function spends running on an input of size $N$ and the amount of data it generates from an input of size $N$.

We have $2T(N-1)$ because we first make the recursive call, and the following line makes a copy of all of that data again in preparation to tack on the element we left out of the recursive call. We tack on exactly $2^{N-1}$ copies of the element we intentionally left out, which is where the last term comes from. Finally, $T(1)=2$ because with 1 element we return 2 possible subsets: the entire set and the empty set.
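If it helps to see this concretely, here is a sketch (my own rough operation count, not part of the original algorithm) that tallies the element and reference copies made at each level of the recursion; the total grows like $N \cdot 2^N$:

def power_sets_counted(A):
    """Same recursion as power_sets, but also returns a rough count of the
    element and reference copies performed along the way."""
    if len(A) == 0:
        return [[]], 1

    ps_n1, ops = power_sets_counted(A[:-1])
    # Each `sub + [A[-1]]` copies len(sub) old elements plus the one new element.
    ops += sum(len(sub) + 1 for sub in ps_n1)
    ps_n = [sub + [A[-1]] for sub in ps_n1]
    # The final concatenation copies one reference per subset.
    ops += len(ps_n1) + len(ps_n)
    return ps_n1 + ps_n, ops

for n in range(1, 12):
    _, ops = power_sets_counted(list(range(n)))
    print(n, ops, ops / (n * 2 ** n))  # the ratio stays bounded, consistent with O(N*2^N)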

Unrolling this recurrence we have:

$T(N)=2T(N-1)+2^{N-1}$

$\quad \quad =2(2T(N-2)+2^{N-2})+2^{N-1}$

$\quad \quad =2(2(2T(N-3)+2^{N-3})+2^{N-2})+2^{N-1}$

$...$

$\quad \quad = 2^{N-1}T(1) + (N-1) \cdot 2^{N-1}$

$\quad \quad = 2^N + (N-1) \cdot 2^{N-1}$

$\quad \quad = O(N \cdot 2^N)$
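As a quick sanity check (this snippet is just a verification, not part of the derivation), the recurrence and the closed form agree for small $N$:

def T(N):
    # T(N) = 2*T(N-1) + 2^(N-1), with T(1) = 2
    return 2 if N == 1 else 2 * T(N - 1) + 2 ** (N - 1)

for N in range(1, 16):
    assert T(N) == 2 ** N + (N - 1) * 2 ** (N - 1)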

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange