Problem

I'm reviewing some old notes from my algorithms course, and the dynamic programming problems seem a bit tricky to me. I have a problem where we have an unlimited supply of coins with denominations x1, x2, ... xn, and we want to make change for some value X. We are trying to design a dynamic program to decide whether change for X can be made (not minimizing the number of coins or returning which coins are used; just true or false).

I've done some thinking about this problem, and I can see a recursive method of doing this where it's something like...

MakeChange(X, x[1..n])   // x holds the coin denominations
    for (int i = 1; i <= n; i++)
    {
        if ( (X - x[i] == 0) || (X - x[i] > 0 && MakeChange(X - x[i], x)) )
            return true;
    }
    return false;

Converting this to a dynamic program is not coming so easily to me. How might I approach this?


Solution

Your code is a good start. The usual way to convert a recursive solution into a dynamic-programming one is to work "bottom-up" instead of "top-down". That is, if your recursive solution calculates something for a particular X using values for smaller X, then instead calculate the same thing starting at the smaller values, and put the results in a table.

In your case, change your MakeChange recursive function into a canMakeChange table.

canMakeChange[0] = True
for X = 1 to (your max value):
   canMakeChange[X] = False
   for i=1 to n:
     if X>=x[i] and canMakeChange[X-x[i]]==True: 
       canMakeChange[X]=True
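The table above can be written as a runnable Python sketch (function and variable names are mine, not from the answer):

```python
def can_make_change(X, coins):
    # can_make[v] is True iff value v can be made from the coin denominations
    can_make = [False] * (X + 1)
    can_make[0] = True  # zero is always makeable, with no coins at all
    for v in range(1, X + 1):
        for c in coins:
            # v is makeable if some coin c leaves a makeable remainder
            if v >= c and can_make[v - c]:
                can_make[v] = True
                break
    return can_make[X]
```

This runs in O(X * n) time, filling each table entry once rather than re-exploring the recursion tree.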

Other tips

My solution below is a greedy-flavored approach that explores all the solutions while caching the best one found so far. If the currently executing solution is already as long as the cached one, it abandons that path. Note: for best performance, the denominations should be in decreasing order.

import java.util.ArrayList;
import java.util.List;

public class CoinDenomination {

    int denomination[] = new int[]{50,33,21,2,1};
    int minCoins=Integer.MAX_VALUE;
    String path;

    class Node{
        public int coinValue;
        public int amtRemaining;
        public int solutionLength;
        public String path="";
        public List<Node> next;

        public String toString() { return "C: "+coinValue+" A: "+amtRemaining+" S:"+solutionLength;}
    }

    public List<Node> build(Node node)
    {
        if(node.amtRemaining==0)
        {
            if (minCoins>node.solutionLength) {
                minCoins=node.solutionLength;
                path=node.path;
            }
            return null;
        }

        if (node.solutionLength>=minCoins) return null; // prune: can no longer beat the best solution found
        List<Node> nodes = new ArrayList<Node>();
        for(int deno:denomination)
        {
           if(node.amtRemaining>=deno)
           {
               Node nextNode = new Node();
               nextNode.amtRemaining=node.amtRemaining-deno;
               nextNode.coinValue=deno;
               nextNode.solutionLength=node.solutionLength+1;
               nextNode.path=node.path+"->"+deno;
               nextNode.next = build(nextNode);
               nodes.add(nextNode); // add the child node, not the current one

           }
        }

        return nodes;
    }

    public void start(int value)
    {
        Node root = new Node();
        root.amtRemaining=value;
        root.solutionLength=0;
        root.path="start";
        root.next=build(root);
        System.out.println("Smallest solution of coins count: "+minCoins+" \nCoins: "+path);
    }

    public static void main(String args[])
    {
        CoinDenomination coin = new CoinDenomination();
        coin.start(35);
    }
}

Just add a memoization step to the recursive solution, and the dynamic algorithm falls right out of it. The following example is in Python:

cache = {}
def makeChange(amount, coins):  # coins must be a tuple, since it is used as part of a dict key
    if (amount,coins) in cache:
        return cache[amount, coins]
    if amount == 0:
        ret = True
    elif not coins:
        ret = False
    elif amount < 0:
        ret = False 
    else:
        ret = makeChange(amount-coins[0], coins) or makeChange(amount, coins[1:])
    cache[amount, coins] = ret
    return ret

Of course, you could use a decorator to auto-memoize, leading to more natural code:

def memoize(f):
    cache = {}
    def ret(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return ret

@memoize
def makeChange(amount, coins):
    if amount == 0:
        return True
    elif not coins:
        return False
    elif amount < 0:
        return False
    return makeChange(amount-coins[0], coins) or makeChange(amount, coins[1:])

Note: even the non-dynamic-programming version you posted had all kinds of edge-case bugs, which is why the makeChange above is slightly longer than yours.

This paper is very relevant: http://ecommons.library.cornell.edu/handle/1813/6219

Basically, as others have said, making optimal change totaling an arbitrary X from an arbitrary denomination set is NP-hard, meaning dynamic programming won't yield a polynomial-time algorithm (the standard DP is only pseudo-polynomial in X). This paper proposes a polynomial-time algorithm (that is, polynomial in the size of the input, which is an improvement upon previous algorithms) for determining whether the greedy algorithm always produces optimal results for a given set of denominations.

Here is a C# version, just for reference, that finds the minimal number of coins required for a given sum:

(one may refer to my blog @ http://codingworkout.blogspot.com/2014/08/coin-change-subset-sum-problem-with.html for more details)

public int DP_CoinChange_GetMinimalDenominations(int[] coins, int sum)
{
    coins.ThrowIfNull("coins"); // validation helpers from the linked blog post
    coins.Throw("coins", c => c.Length == 0 || c.Any(ci => ci <= 0));
    sum.Throw("sum", s => s <= 0);
    int[][] DP_Cache = new int[coins.Length + 1][];
    for (int i = 0; i <= coins.Length; i++)
    {
        DP_Cache[i] = new int[sum + 1];
    }
    for (int i = 1; i <= coins.Length; i++)
    {
        for (int s = 0; s <= sum; s++)
        {
            if (coins[i - 1] == s)
            {
                //OK, we can reach this sum using just the current coin,
                //so assign 1; no need to process further
                DP_Cache[i][s] = 1;
            }
            else
            {
                //initialize with the value that excludes the current coin
                int minNoOfCoinsWithoutUsingCurrentCoin_I = DP_Cache[i - 1][s];
                DP_Cache[i][s] = minNoOfCoinsWithoutUsingCurrentCoin_I;
                if ((s > coins[i - 1]) //current coin can participate
                    && (DP_Cache[i][s - coins[i - 1]] != 0))
                {
                    int noOfCoinsUsedIncludingCurrentCoin_I =
                        DP_Cache[i][s - coins[i - 1]] + 1;
                    if (minNoOfCoinsWithoutUsingCurrentCoin_I == 0)
                    {
                        //so far we couldn't find coins that sum to 's'
                        DP_Cache[i][s] = noOfCoinsUsedIncludingCurrentCoin_I;
                    }
                    else
                    {
                        DP_Cache[i][s] = Math.Min(noOfCoinsUsedIncludingCurrentCoin_I,
                            minNoOfCoinsWithoutUsingCurrentCoin_I);
                    }
                }
            }
        }
    }
    return DP_Cache[coins.Length][sum];
}
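For reference, the same minimal-coin-count recurrence can be sketched with a one-dimensional table in Python (my own sketch, not from the linked blog post; `None` plays the role of the zero sentinel for unreachable sums):

```python
def min_coins(coins, total):
    # dp[s] = fewest coins summing to s; None marks "unreachable"
    dp = [None] * (total + 1)
    dp[0] = 0  # zero coins make the sum 0
    for s in range(1, total + 1):
        for c in coins:
            # coin c can extend any reachable sub-total s - c
            if c <= s and dp[s - c] is not None:
                if dp[s] is None or dp[s - c] + 1 < dp[s]:
                    dp[s] = dp[s - c] + 1
    return dp[total]
```

With the denominations {50, 33, 21, 2, 1} used in the Java example, `min_coins` finds that 35 needs only two coins (33 + 2).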

In the general case, where coin values can be arbitrary, the problem you are presenting is a variant of the Knapsack Problem and is known to be NP-complete (Pearson, D. 2004), and is therefore not solvable in polynomial time by techniques such as dynamic programming.

Take the pathological example of x[2] = 51, x[1] = 50, x[0] = 1, X = 100. Then the algorithm is required to 'consider' both the possibility of making change with coin x[2] and, alternatively, making change beginning with x[1]. The first step used with national coinages, otherwise known as the Greedy Algorithm -- to wit, "use the largest coin less than the working total" -- will not work with pathological coinages. Instead, such algorithms experience a combinatoric explosion, which is what qualifies the problem as NP-complete.

For certain special coin value arrangements, such as practically all those in actual use, and including the fictitious system X[i+1] == 2 * X[i], there are very fast algorithms, even O(1) in the fictitious case given, to determine the best output. These algorithms exploit properties of the coin values.
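The pathological case above can be checked directly. Here is a small sketch (function names are mine, not from the answer) comparing the greedy count against the DP optimum for the coin set {1, 50, 51}:

```python
def greedy_count(coins, total):
    # repeatedly take the largest coin that fits into the working total
    count = 0
    for c in sorted(coins, reverse=True):
        count += total // c
        total %= c
    return count if total == 0 else None  # None if change cannot be completed

def optimal_count(coins, total):
    # standard bottom-up DP over all sub-totals
    INF = float('inf')
    dp = [0] + [INF] * total
    for s in range(1, total + 1):
        for c in coins:
            if c <= s and dp[s - c] + 1 < dp[s]:
                dp[s] = dp[s - c] + 1
    return dp[total]
```

For X = 100, greedy grabs the 51 first and is then forced to use forty-nine 1s (50 coins total), while the optimum is two 50s; this gap is exactly why greedy fails on pathological coinages.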

I am not aware of a dynamic programming solution: one which takes advantage of optimal sub-solutions as required by the programming motif. In general a problem can only be solved by dynamic programming if it can be decomposed into sub-problems which, when optimally solved, can be re-composed into a solution which is provably optimal. That is, if the programmer cannot mathematically demonstrate ("prove") that re-composing optimal sub-solutions of the problem results in an optimal solution, then dynamic programming cannot be applied.

An example commonly given of dynamic programming is an application to multiplying several matrices. Depending on the sizes of the matrices, the choice to evaluate A·B·C as either of the two equivalent forms ((A·B)·C) or (A·(B·C)) leads to different numbers of multiplications and additions. That is, one method is faster than the other. Dynamic programming is a motif which tabulates the computational costs of the different methods, and performs the actual calculations according to a schedule (or program) computed dynamically at run-time.

A key feature is that computations are performed according to the computed schedule and not by an enumeration of all possible combinations -- whether the enumeration is performed recursively or iteratively. In the example of multiplying matrices, at each step, only the least-cost multiplication is chosen. As a result, the possible costs of intermediate-cost sub-optimal schedules are never calculated. In other words, the schedule is not calculated by searching all possible schedules for the optimal, but rather by incrementally building an optimal schedule from nothing.
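The matrix-chain scheduling described above can be sketched as the classic interval DP over split points; `p` here is an assumed list of dimensions, with matrix i having shape p[i-1] x p[i]:

```python
def matrix_chain_cost(p):
    # p[i-1] x p[i] is the shape of matrix i; there are n matrices in the chain
    n = len(p) - 1
    INF = float('inf')
    # cost[i][j] = min scalar multiplications to compute A_i .. A_j (1-indexed)
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # consider chains of increasing length
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = INF
            for k in range(i, j):           # split point: (A_i..A_k)(A_{k+1}..A_j)
                c = cost[i][k] + cost[k + 1][j] + p[i - 1] * p[k] * p[j]
                if c < cost[i][j]:
                    cost[i][j] = c
    return cost[1][n]
```

For shapes 10x100, 100x5, 5x50, the schedule ((A·B)·C) costs 7500 multiplications versus 75000 for (A·(B·C)); only least-cost sub-chains are ever extended, exactly as described.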

The nomenclature 'dynamic programming' may be compared with 'linear programming' in which 'program' is also used in the sense meaning 'to schedule.'

To learn more about dynamic programming, consult the greatest book on algorithms yet known to me, "Introduction to Algorithms" by Cormen, Leiserson, Rivest, and Stein. "Rivest" is the 'R' of "RSA" and dynamic programming is but one chapter of scores.


If you write it in a recursive way, that is fine; just use memoized search. You have to store what you have already calculated, so it will not be calculated again. Note that the memo must be keyed on the remaining amount X, not on the coin index:

int memory[X+1]; // initialize to -1, meaning "not yet calculated"
MakeChange(X, x[1..n]){
    if (X == 0) return true;
    if (memory[X] != -1) return memory[X];
    for (int i = 1; i <= n; i++)
    {
        if (X >= x[i] && MakeChange(X - x[i], x))
            return memory[X] = true;
    }
    return memory[X] = false;
}
License: CC-BY-SA with attribution
Not affiliated with StackOverflow