Question

I've been looking at the following combination sum problem:

Given a set of candidate numbers (candidates) (without duplicates) and a target number (target), find all unique combinations in candidates where the candidate numbers sum to target.

The same repeated number may be chosen from candidates unlimited number of times.

Input: candidates = [2,3,5], target = 8,
A solution set is:
[
  [2,2,2,2],
  [2,3,3],
  [3,5]
]

Most of the solutions I encountered for this problem were some variant of the following backtracking solution:

public List<List<Integer>> combinationSum(int[] nums, int target) {
    List<List<Integer>> list = new ArrayList<>();
    backtrack(list, new ArrayList<>(), nums, target, 0);
    return list;
}

private void backtrack(List<List<Integer>> list, List<Integer> tempList, int[] nums, int remain, int start){
    if(remain < 0) return;
    else if(remain == 0) list.add(new ArrayList<>(tempList));
    else{ 
        for(int i = start; i < nums.length; i++){
            tempList.add(nums[i]);
            backtrack(list, tempList, nums, remain - nums[i], i); // not i + 1 because we can reuse same elements
            tempList.remove(tempList.size() - 1);
        }
    }
}

Most of the comments on these solutions implied an asymptotic complexity of O(2^N), since this code can potentially enumerate every subset in the worst case.

However, this problem statement seems very similar (if not identical) to the coin change problem:

You are given coins of different denominations and a total amount of money. Write a function to compute the number of combinations that make up that amount. You may assume that you have an infinite number of each kind of coin.

Input: amount = 5, coins = [1, 2, 5]
Output: 4
Explanation: there are four ways to make up the amount:
5=5
5=2+2+1
5=2+1+1+1
5=1+1+1+1+1

The most common solution I've found to this problem is the following O(N*amount) dynamic programming solution:

class Solution {
  public int change(int amount, int[] coins) {
    int[] dp = new int[amount + 1];
    dp[0] = 1;

    for (int coin : coins) {
      for (int x = coin; x < amount + 1; ++x) {
        dp[x] += dp[x - coin];
      }
    }
    return dp[amount];
  }
}

It seems to me that the first solution is just the top-down variant of this algorithm.

I can't seem to figure out why the first solution is exponential and the second one is not. Where have I gone wrong in my understanding?

Is it actually printing the subsets vs counting that makes the difference here?

Is there any way to print all combinations whilst preserving polynomial time?


Solution

The solution to the coin counting problem uses memoization (in its bottom-up, tabulated form) to reduce the exponential search to polynomial time.

The trick is that the amount is always decreasing and can be computed from the smaller leftover amounts. This means we only ever need to compute up to amount distinct values, and every next value is easy to compute from the ones before it.
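To make the connection to the backtracking code concrete, here is a minimal sketch (the class name `TopDownChange` is mine, not from the post) of the same recursion on (start, remain) as the backtracking solution, but counting combinations instead of collecting them, with a memo table over those two parameters. The memo is exactly what collapses the exponential recursion tree into the polynomial-size DP:

```java
class TopDownChange {
    public int change(int amount, int[] coins) {
        // memo[start][remain] caches the number of combinations; null = not computed yet.
        Integer[][] memo = new Integer[coins.length + 1][amount + 1];
        return count(amount, coins, 0, memo);
    }

    private int count(int remain, int[] coins, int start, Integer[][] memo) {
        if (remain == 0) return 1;           // found one valid combination
        if (start == coins.length) return 0; // no coins left to try
        if (memo[start][remain] != null) return memo[start][remain];

        int ways = count(remain, coins, start + 1, memo);             // skip coins[start]
        if (remain >= coins[start])
            ways += count(remain - coins[start], coins, start, memo); // reuse coins[start]
        return memo[start][remain] = ways;
    }
}
```

There are only O(N * amount) distinct (start, remain) states, each computed once, which matches the bottom-up table below. Note the memo stores counts, not lists of combinations; that is precisely why it cannot be bolted onto the subset-collecting version for free.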

Now for the main question: you are right that producing the subsets themselves is what makes the difference. The number of "values" you need to compute is now the count you get from the coin problem (for example, if you calculate that there are 150 possible ways to distribute the coins, you then need to actually find all 150 solutions). This number is not as small as amount and can be exponentially large in it.

In short: calculating the number of solutions can easily be bounded by polynomial time, but producing every solution means doing at least O(number of solutions) work.
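You can watch the solution count outgrow the amount using the bottom-up counter from the question itself (the wrapper class `CountGrowth` is mine). With coins {1, 2, ..., n} and target n, the count is the integer partition number p(n), which grows superpolynomially in n (roughly e^(c*sqrt(n))), even though the counting DP itself stays polynomial in n:

```java
class CountGrowth {
    // Same bottom-up counter as in the question.
    static int change(int amount, int[] coins) {
        int[] dp = new int[amount + 1];
        dp[0] = 1;
        for (int coin : coins) {
            for (int x = coin; x <= amount; ++x) {
                dp[x] += dp[x - coin];
            }
        }
        return dp[amount];
    }

    public static void main(String[] args) {
        // coins = {1, 2, ..., n}, target = n  =>  count = p(n), the partition number
        for (int n : new int[]{10, 20, 50}) {
            int[] coins = new int[n];
            for (int i = 0; i < n; i++) coins[i] = i + 1;
            System.out.println("amount=" + n + "  ways=" + change(n, coins));
        }
    }
}
```

Counting these p(n) combinations takes O(n^2) table updates here, but any algorithm that prints them all must do at least p(n) work, which dwarfs any polynomial in n.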

I hope that helped! I'm new here :p

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange