

Chapter 6

Often, we face a situation where no natural greedy algorithm exists and divide and conquer is not effective at reducing the running time, such as when we do not know "what is coming next". Then we turn to Dynamic Programming, where we carefully structure the computation so that it stays close to brute-force search but systematically reuses solutions to subproblems to save running time.

Section 1: Weighted Interval Scheduling

The first dynamic programming problem is the weighted interval scheduling problem. It is a generalization of the one we saw in the greedy algorithm chapter, and it can no longer be tackled with a greedy algorithm. Brute-force search over all subsets of intervals takes exponential time and of course, we want to reduce it. The first step is to order the intervals by their finishing times. Second, set up a function p(j) that gives the largest index i < j such that interval i does not overlap with interval j, which assures us that if we choose any interval with index at most p(j), it will not overlap with j. Hence, when we analyze whether or not an interval j shall be in the optimal solution given intervals 1…n, we need to see if: v_j + opt(p(j)) > opt(j-1). If so, then it is in; if not, then it shall not be included. This operation only takes constant time, because there are only two possible cases. Along with the comparison, obviously, we need to record all the optimal values opt(j), which takes O(n) time overall. The memoization saves us time and also enables us to find the intervals in linear time later on.

When we finally compute opt(n) for the n intervals, all we have right at that moment is the value of the optimal weight, opt(n), but not the actual intervals that give us that value. Thanks to the memoized table, we can trace back through it and figure out the intervals, which takes O(n) time. Together, the algorithm takes O(n) time after the initial O(n log n) sort.
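To make the steps above concrete, here is a minimal Python sketch of the iterative algorithm, written as my own illustration rather than the textbook's code; it assumes intervals are given as (start, finish, weight) tuples, and all function and variable names are my own.

<code python>
import bisect

def weighted_interval_scheduling(intervals):
    """intervals: list of (start, finish, weight) tuples.
    Returns (optimal total weight, list of chosen intervals)."""
    # Step 1: sort intervals by finishing time.
    intervals = sorted(intervals, key=lambda iv: iv[1])
    n = len(intervals)
    finishes = [iv[1] for iv in intervals]

    # Step 2: p[j] = largest 1-based index i < j whose interval does not
    # overlap interval j (finish <= start of j); 0 if no such interval.
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        start_j = intervals[j - 1][0]
        # binary search among intervals 1..j-1 for the rightmost
        # interval finishing at or before start_j
        p[j] = bisect.bisect_right(finishes, start_j, 0, j - 1)

    # Step 3: fill the table; opt[j] = best weight using intervals 1..j.
    opt = [0] * (n + 1)
    for j in range(1, n + 1):
        v_j = intervals[j - 1][2]
        opt[j] = max(v_j + opt[p[j]], opt[j - 1])  # include j, or skip it

    # Step 4: trace back through the table to recover the intervals.
    chosen = []
    j = n
    while j > 0:
        v_j = intervals[j - 1][2]
        if v_j + opt[p[j]] > opt[j - 1]:  # interval j was included
            chosen.append(intervals[j - 1])
            j = p[j]
        else:
            j -= 1
    chosen.reverse()
    return opt[n], chosen
</code>

For example, on [(0, 3, 2), (2, 5, 4), (4, 7, 4)] this returns (6, [(0, 3, 2), (4, 7, 4)]): interval 2 conflicts with both of the others, so the best choice is the first and third intervals.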

Readability is 8. The section is clear and easy to follow.

Section 2: Principles of Dynamic Programming: Memoization or Iteration over Subproblems

The section sums up the basic principles of dynamic programming. The fundamental idea is to iterate through all subproblems rather than recomputing them recursively from scratch. While we solve the subproblems, we record all the solutions, which in turn helps us solve the whole problem within polynomial time instead of exponential time. The basic outline/requirements for dynamic programming are listed below (a memoized sketch follows the list):

I. There are only a polynomial number of subproblems.

II. The solution to the original problem can be easily (for example, within constant or linear time) computed from the solutions to the subproblems.

III. There is a natural ordering of the subproblems (for example, by finishing time in Weighted Interval Scheduling).
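Since the section's title contrasts memoization with iteration over subproblems, here is a minimal top-down (memoized) sketch of the same weighted interval scheduling recurrence. This is my own illustrative code, not the book's: it assumes the sorted intervals and the array p from the sketch in Section 1, and the memo table with None as a "not yet computed" sentinel is my own choice.

<code python>
def memoized_opt(j, intervals, p, memo):
    """Top-down version: compute opt(j) recursively, but store each
    answer in memo so every subproblem is solved at most once."""
    if j == 0:
        return 0
    if memo[j] is not None:  # this subproblem was already solved
        return memo[j]
    v_j = intervals[j - 1][2]
    memo[j] = max(v_j + memoized_opt(p[j], intervals, p, memo),
                  memoized_opt(j - 1, intervals, p, memo))
    return memo[j]

# Usage: memo = [None] * (n + 1); best = memoized_opt(n, intervals, p, memo)
</code>

Either version makes only n distinct subproblem computations, each in constant time, which is exactly why requirement I above keeps the whole algorithm polynomial.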

This section concisely summarizes the logic behind dynamic programming. I think it is quite useful to read before looking at more complicated examples.

Readability is 8.

Section 3: Segmented Least Squares: Multi-way Choices

This section talks about another problem to be dealt with through dynamic programming, but it is slightly different from the Weighted Interval Scheduling problem: instead of a binary decision (include interval j or not), each subproblem involves a multi-way choice.
