Chapter 6 - Dynamic Programming

While we have seen problems for which greedy algorithms work, there are problems for which no greedy algorithm will produce an optimal solution. Divide and conquer is another approach we can use, but it is generally not powerful enough to bring a problem with an exponential brute-force solution down to polynomial time. Dynamic programming divides a problem into smaller and smaller subproblems, then "builds up" solutions to larger and larger subproblems until we arrive back at the original problem. Because each subproblem is solved only once, rather than every possible solution being examined, dynamic programming solutions should perform far better than their brute-force counterparts.

Readability: 9/10

6.1 - Weighted Interval Scheduling: A Recursive Procedure

The weighted interval scheduling problem can be defined as follows: given a single resource and a set of weighted intervals that would like to be scheduled on it, find a subset of mutually non-overlapping intervals that maximizes the total weight. To solve this problem, we assume the intervals are ordered by increasing finish time. An optimal solution either contains the last interval or it does not. If it does not, the optimal solution is simply the optimal solution over all the other intervals; if it does, the optimal solution is the last interval's weight plus the optimal solution over the intervals that finish before the last one starts. Writing this recursion directly, we discover that we recompute "opt" for the same interval many times, which leads to exponential running time. To fix this, we store the optimal value for each interval in a memoized array so that nothing is recomputed. This gives us O(n) running time, assuming the intervals come presorted. The best way to recover the actual intervals is then not to track them as the algorithm runs, but to trace back through the memoized array after we are done, which also takes O(n) time.
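
Here is a minimal sketch in Python of the memoized recursion and the traceback, assuming intervals are given as (start, finish, weight) tuples; the names weighted_interval_schedule, opt, p, and M are illustrative, not from the text.

<code python>
import bisect

def weighted_interval_schedule(intervals):
    # Sort by finish time, as the chapter assumes.
    intervals = sorted(intervals, key=lambda iv: iv[1])
    starts = [s for s, f, w in intervals]
    finishes = [f for s, f, w in intervals]
    n = len(intervals)

    # p[j] = index of the last interval that finishes no later than
    # interval j starts (-1 if there is none).
    p = [bisect.bisect_right(finishes, starts[j]) - 1 for j in range(n)]

    memo = [None] * n  # memoized array of optimal values

    def opt(j):
        if j < 0:
            return 0
        if memo[j] is None:
            # An optimal solution either contains interval j or it does not.
            memo[j] = max(intervals[j][2] + opt(p[j]), opt(j - 1))
        return memo[j]

    best = opt(n - 1)

    def M(j):  # memoized value, with M(-1) = 0
        return memo[j] if j >= 0 else 0

    # Trace back through the memoized array to recover the intervals,
    # as the section suggests, instead of tracking them during recursion.
    chosen, j = [], n - 1
    while j >= 0:
        if intervals[j][2] + M(p[j]) >= M(j - 1):
            chosen.append(intervals[j])
            j = p[j]
        else:
            j -= 1
    chosen.reverse()
    return best, chosen
</code>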

Readability: 7/10

6.2 - Principles of Dynamic Programming: Memoization or Iteration over Subproblems

Because we can recover the optimal solution from the memoized array, we can also build that array iteratively instead of recursively: we solve the smallest subproblem first, then move on to bigger and bigger subproblems. Then, when we need to compare the solution that includes the current entry against the solution that excludes it, both subproblem values are already in the array and can simply be looked up. Dynamic programming is possible on a problem when a few conditions hold: there are only polynomially many subproblems, the solution to the large problem can be computed easily from the solutions to the subproblems, and the subproblems can be ordered in a natural way from a "smallest" to a "largest".
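
A minimal sketch of the same computation done iteratively, under the same assumptions as the sketch above (intervals presorted by finish time, p[j] precomputed); iterative_opt is an illustrative name.

<code python>
def iterative_opt(weights, p):
    # weights[j] and p[j] are 0-indexed as in the sketch above;
    # M[j] holds the optimal value over the first j intervals, M[0] = 0.
    n = len(weights)
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        # Compare including interval j against excluding it; both
        # subproblem values are already in M, so each step is O(1).
        M[j] = max(weights[j - 1] + M[p[j - 1] + 1], M[j - 1])
    return M[n]
</code>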

Readability: 8/10

6.3 - Segmented Least Squares: Multi-way Choices

We are trying to solve the problem of finding the line or lines of best fit for a set of (x,y) points, where each additional line segment adds a fixed cost to the overall penalty (which also includes the squared errors). Ordering the points by x-coordinate, we note that the last point must belong to some segment that begins at some earlier point. Once we know the first point in that segment, we can remove the segment and only consider the points that come before it. Computing all the error values runs in O(n<sup>3</sup>) time, and filling in the memoized array takes O(n<sup>2</sup>) time.
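
A minimal sketch of the recurrence, assuming err[i][j] (1-indexed) already holds the minimum squared error of a single line through points i..j (the O(n<sup>3</sup>) precomputation mentioned above) and C is the fixed cost per segment; segmented_least_squares is an illustrative name.

<code python>
def segmented_least_squares(err, n, C):
    # OPT[j] = minimum total penalty for the first j points.
    OPT = [0.0] * (n + 1)
    for j in range(1, n + 1):
        # The segment containing point j begins at some point i <= j:
        # a multi-way choice, tried exhaustively below.
        OPT[j] = min(err[i][j] + C + OPT[i - 1] for i in range(1, j + 1))
    return OPT[n]
</code>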

Readability: 6/10. This was a difficult algorithm to understand.
