Question

In the book I am using, Introduction to the Design & Analysis of Algorithms, dynamic programming is said to focus on the Principle of Optimality: "An optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances".

The greedy technique, by contrast, focuses on expanding partially constructed solutions until you arrive at a solution for the complete problem, where each expansion must be "the best local choice among all feasible choices available on that step".

Since both involve local optimality, isn't one a subset of the other?


Solution

Dynamic programming is applicable to problems exhibiting the properties of:

  • overlapping subproblems, and
  • optimal substructure.

Optimal substructure means that optimal solutions to subproblems can be combined to produce an optimal solution to the larger problem. The difference between dynamic programming and greedy algorithms is that with dynamic programming the subproblems overlap, and those subproblems are solved using memoization: the technique of caching the solutions to subproblems so they can be reused to solve other subproblems more quickly.

This answer has gotten some attention, so I'll give some examples.

Consider the problem "Making change with dollars, nickels, and pennies." This is a greedy problem. It exhibits optimal substructure because you can solve for the number of dollars. Then, solve for the number of nickels. Then the number of pennies. You can then combine the solutions to these subproblems efficiently. It does not really exhibit overlapping subproblems since solving each subproblem doesn't help much with the others (maybe a little bit).
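A minimal sketch of this greedy strategy in Python (the denomination values and function name are just for illustration):

    def make_change(amount_cents):
        # Greedy change-making with dollars (100), nickels (5), and pennies (1).
        # Each local choice -- take as many of the largest coin as possible --
        # is finalized immediately and never revisited.
        result = {}
        for coin in (100, 5, 1):  # largest denomination first
            result[coin], amount_cents = divmod(amount_cents, coin)
        return result

    print(make_change(237))  # {100: 2, 5: 7, 1: 2}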

Consider the problem "Fibonacci numbers." It exhibits optimal substructure because you can solve F(10) from F(9) and F(8) efficiently (by addition). These subproblems overlap because they both share F(7). If you memoize the result of F(7) when you're solving F(8), you can solve F(9) more quickly.
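A sketch of this in Python, with an explicit dictionary as the memo table (names are illustrative):

    def fib(n, memo=None):
        # Top-down Fibonacci with memoization. F(10) needs F(9) and F(8),
        # and both ultimately need F(7); the memo ensures each overlapping
        # subproblem is solved only once.
        if memo is None:
            memo = {0: 0, 1: 1}
        if n not in memo:
            memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
        return memo[n]

    print(fib(10))  # 55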

In response to the comment about dynamic programming having to do with "reconsidering decisions": This is obviously not true for any linear dynamic programming algorithm like the maximum subarray problem or the Fibonacci problem above.

Essentially, imagine a problem having optimal substructure as a directed acyclic graph whose nodes represent subproblems (wherein the whole problem is represented by a node whose indegree is zero), and whose directed edges represent dependencies between subproblems. Then, a greedy problem is a tree (all nodes except the root have unit indegree). A dynamic programming problem has some nodes with indegree greater than one. This illustrates the overlapping subproblems.
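To make that picture concrete, here is a small Python sketch (written for this answer, not from the original) that builds the subproblem DAG for Fibonacci and counts indegrees; nodes with indegree greater than one are exactly the overlapping subproblems:

    from collections import Counter

    def fib_indegrees(n):
        # For each subproblem F(k), count how many other subproblems depend on it.
        indegree = Counter()
        for k in range(n, 1, -1):  # F(k) has edges to F(k-1) and F(k-2)
            indegree[k - 1] += 1
            indegree[k - 2] += 1
        return indegree

    print(fib_indegrees(10))
    # F(8) down to F(1) each have indegree 2 -- overlapping subproblems.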

OTHER TIPS

The difference is that dynamic programming requires you to remember the answer for the smaller states, while a greedy algorithm is local in the sense that all the information needed is in the current state. Of course, there is some intersection.

The key distinction is that greedy algorithms compose solutions "statically" in the sense that each local choice in the solution can be finalized without needing to know anything about the other local choices made. Dynamic algorithms, however, create sets of possible solutions to sub-problems and only generate a single solution to the global problem when all sub-problems have been considered. The Wikipedia page on greedy algorithms puts it well:

The choice made by a greedy algorithm may depend on choices made so far but not on future choices or all the solutions to the subproblem. It iteratively makes one greedy choice after another, reducing each given problem into a smaller one. In other words, a greedy algorithm never reconsiders its choices. This is the main difference from dynamic programming, which is exhaustive and is guaranteed to find the solution. After every stage, dynamic programming makes decisions based on all the decisions made in the previous stage, and may reconsider the previous stage's algorithmic path to solution.

DP algorithms use the fact that (for some problems) an optimal solution to a problem of size n is composed of an optimal solution to a problem of size n' < n, and use this to build the solution bottom-up, from the smallest problem to the required size.

This fits the principle of recursion very well (reduce the problem to a smaller subproblem and invoke recursively), and indeed DP solutions are often expressed as a recursive formula.
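For instance, the recursive formula F(n) = F(n-1) + F(n-2) translates directly into a bottom-up loop; a minimal Python sketch:

    def fib_bottom_up(n):
        # Build F(0), F(1), ..., F(n) from the smallest problem upward.
        # Only the last two values are ever needed, so the table collapses
        # into two variables.
        a, b = 0, 1  # F(0), F(1)
        for _ in range(n):
            a, b = b, a + b
        return a

    print(fib_bottom_up(10))  # 55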

Greedy algorithms look at a local point and make a choice based on the data at that point. For some problems (shortest path without negative weights, for example) this local choice leads to an optimal solution.

A good example of the difference between the two approaches is the shortest path problem:

  • Dijkstra's algorithm is a greedy approach: at each step, choose the node whose path from the source is currently shortest. The choice is made greedily, based on the local state of the algorithm.
  • The Bellman-Ford algorithm is a DP solution: "relax" ALL edges |V| - 1 times, effectively reducing the problem pass by pass (both are sketched below).
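A compact Python sketch of both, on a graph given as an adjacency dict (this representation and the names are assumptions made for illustration):

    import heapq

    def dijkstra(graph, source):
        # Greedy: repeatedly finalize the unvisited node with the smallest
        # tentative distance. Requires non-negative edge weights.
        dist = {source: 0}
        heap = [(0, source)]
        done = set()
        while heap:
            d, u = heapq.heappop(heap)
            if u in done:
                continue
            done.add(u)  # greedy choice: u's distance is now final
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    def bellman_ford(graph, source):
        # DP: relax ALL edges |V| - 1 times; after pass i, dist holds the
        # shortest paths that use at most i edges (the subproblem).
        nodes = set(graph) | {v for edges in graph.values() for v, _ in edges}
        dist = {u: float("inf") for u in nodes}
        dist[source] = 0
        for _ in range(len(nodes) - 1):
            for u, edges in graph.items():
                for v, w in edges:
                    if dist[u] + w < dist[v]:
                        dist[v] = dist[u] + w
        return dist

    g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
    print(dijkstra(g, "a"))      # {'a': 0, 'b': 1, 'c': 3}
    print(bellman_ford(g, "a"))  # same distances (key order may vary)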

Greedy method:

  1. Focuses on expanding partially constructed solutions.
  2. Produces a feasible solution, which is not necessarily optimal.
  3. Usually more efficient.

Dynamic programming:

  1. Focuses on the principle of optimality.
  2. Produces a guaranteed optimal answer.
  3. Usually less efficient.

The difference between DP and greedy is that DP looks for the global optimum at each subproblem, while greedy only looks for the local optimum. Consider this scenario:

Suppose you are climbing a mountain and want to get as high as possible. The road up the mountain has several branches, and at each intersection you must decide which branch to take; this decision is the subproblem of the climbing problem (the goal is the same, only the starting point differs).

With the greedy algorithm, you always choose whichever branch looks steepest. This is a locally optimal decision and is not guaranteed to lead to the best result.

With DP, at each intersection you already know the highest altitude each branch will lead you to (assuming your evaluation order is reversed, i.e. from the end points back to the starting point), and you choose the one with the largest altitude. This decision is built upon the global optima of the future subproblems, and is therefore globally optimal for this subproblem.
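A toy Python sketch of the mountain analogy (the trail data is invented for illustration): each intersection's answer is computed from the already-computed answers of its branches.

    # Each intersection: (altitude, list of intersections its branches lead to).
    trails = {
        "base":   (100, ["left", "right"]),
        "left":   (300, ["peak_a"]),
        "right":  (500, ["peak_b"]),   # the steeper-looking start...
        "peak_a": (900, []),           # ...but the gentler branch goes higher
        "peak_b": (600, []),
    }

    def highest(point, memo=None):
        # DP: the best altitude reachable from `point` is the best over its
        # branches -- a decision built on the global optima of future subproblems.
        if memo is None:
            memo = {}
        if point not in memo:
            altitude, branches = trails[point]
            memo[point] = max([altitude] + [highest(b, memo) for b in branches])
        return memo[point]

    print(highest("base"))  # 900: DP takes "left"; steepest-first greedy takes "right" and tops out at 600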

The concepts of greedy and dynamic solutions are not mutually exclusive, and I think this is causing a lot of confusion in most answers. I believe amit's answer stresses the most important property: a greedy solution makes decisions based on local information. As a consequence, a greedy solution may end up finding a local optimum instead of the global one. Dynamic solutions break a problem into smaller subproblems and then aggregate the results to obtain the answer to a bigger, more complex problem. So, is it possible for a problem to be both dynamic and greedy? The answer is: yes, it is possible. An example is Dijkstra's algorithm: you make a greedy choice at each step, and yet you reduce the problem to a simpler subproblem.

Still, there are also examples of greedy algorithms that are not DPs: hill climbing, say, is a greedy algorithm that does not break a problem into multiple subproblems; it only ever solves one. There are likewise examples of DPs that are not greedy, e.g. computing the n-th Fibonacci number using memoization is not greedy.
