Chapter 6 - Dynamic Programming
Section 6.1 - Weighted Interval Scheduling: A Recursive Procedure
The section begins by outlining the problem we are seeking to solve. It is the same interval scheduling problem from earlier in the text, except that now each interval has a weight assigned to it, and we want to maximize the total weight of the scheduled intervals.
The section then outlines a recursive structure for solving this problem. Consider the final job: either it is in the optimal solution, or it is not. Therefore, if there are j jobs, Opt(j) = max(value(j) + Opt(p(j)), Opt(j-1)), where p(j) is the largest index i < j such that job i is compatible with job j (that is, job i finishes before job j starts). This leads directly to the algorithm for computing the optimal value: if j == 0, return 0; otherwise, return max(value(j) + Opt(p(j)), Opt(j-1)). Note that within this algorithm we are computing the optimal value for each j = 1, 2, …, n.
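The recurrence above can be sketched directly as code. This is a minimal illustration, assuming jobs are (start, finish, value) tuples sorted by finish time and 1-indexed in the recurrence; the helper names compute_p and opt are illustrative, not from the text.

```python
def compute_p(jobs):
    """p[j] = largest index i < j whose job finishes no later than job j
    starts (0 means no compatible earlier job). Jobs are 1-indexed."""
    n = len(jobs)
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        start_j = jobs[j - 1][0]
        for i in range(j - 1, 0, -1):
            if jobs[i - 1][1] <= start_j:  # job i finishes before job j starts
                p[j] = i
                break
    return p

def opt(j, jobs, p):
    """Direct translation of the recurrence; exponential time without memoization."""
    if j == 0:
        return 0
    # Either job j is in the optimal solution, or it is not.
    return max(jobs[j - 1][2] + opt(p[j], jobs, p), opt(j - 1, jobs, p))

# Example instance (sorted by finish time): optimal value is 11 (jobs 2 and 4).
jobs = [(0, 3, 2), (1, 4, 4), (3, 6, 4), (5, 8, 7)]
p = compute_p(jobs)
best = opt(len(jobs), jobs, p)
```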
If we just run the algorithm as is, it takes exponential time, recomputing the same subproblems over and over. But we have a strategy for combating this: memoization. We simply record each value the first time it is computed, and this way, the runtime is reduced to O(n), assuming the input intervals are sorted by their finish times and the values p(j) have been precomputed. The section proceeds to walk through a clear example of how to output a solution from the algorithm, that is, to recover the exact jobs scheduled, while maintaining linear time.
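The memoized recursion and the traceback that recovers the scheduled jobs might look like the following sketch. The array M caches subproblem values; find_solution is a hypothetical name for the O(n) traceback the section describes, and the jobs and precomputed p array below are illustrative.

```python
def m_compute_opt(j, jobs, p, M):
    """Memoized recurrence: each M[j] is computed at most once, so O(n) total."""
    if M[j] is None:
        if j == 0:
            M[j] = 0
        else:
            M[j] = max(jobs[j - 1][2] + m_compute_opt(p[j], jobs, p, M),
                       m_compute_opt(j - 1, jobs, p, M))
    return M[j]

def find_solution(j, jobs, p, M):
    """Trace back through the filled array M to list the chosen jobs in O(n)."""
    chosen = []
    while j > 0:
        if jobs[j - 1][2] + M[p[j]] >= M[j - 1]:
            chosen.append(j)   # job j belongs to an optimal solution
            j = p[j]
        else:
            j -= 1             # job j is not needed; skip it
    return chosen[::-1]

# Same illustrative instance as before, with p precomputed.
jobs = [(0, 3, 2), (1, 4, 4), (3, 6, 4), (5, 8, 7)]
p = [0, 0, 0, 1, 2]
M = [None] * (len(jobs) + 1)
best = m_compute_opt(len(jobs), jobs, p, M)
schedule = find_solution(len(jobs), jobs, p, M)
```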
I found this section intuitive and straightforward. I had also grasped the material well when it was covered in class. I would rate the readability at 9/10.
Section 6.2 - Principles of Dynamic Programming: Memoization or Iteration over Subproblems
This section opens by discussing the memoization technique used in Section 6.1. Section 6.2 covers this technique in broader terms, as it lies at the heart of dynamic programming.
In the previous section, we used an array to maintain the optimal value for each subproblem, but we filled it in using recursion. In this section, we are given an algorithm that does this iteratively. It sets the first value in the array M equal to zero, and then for each j = 1, 2, …, n, it sets M[j] = max(value(j) + M[p(j)], M[j-1]). This way, the lookups into earlier entries of the array are constant-time as the for-loop progresses, and the algorithm runs in O(n) time.
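The iterative version can be sketched in a few lines. As before, this assumes jobs are (start, finish, value) tuples sorted by finish time with p(j) precomputed; the instance below is illustrative.

```python
def iterative_compute_opt(jobs, p):
    """Fill the array M bottom-up; each entry costs O(1), so O(n) overall."""
    n = len(jobs)
    M = [0] * (n + 1)  # M[0] = 0: the empty set of jobs has value 0
    for j in range(1, n + 1):
        # Either include job j (value(j) + M[p(j)]) or skip it (M[j-1]).
        M[j] = max(jobs[j - 1][2] + M[p[j]], M[j - 1])
    return M

# Same illustrative instance: the optimal value M[n] is 11.
jobs = [(0, 3, 2), (1, 4, 4), (3, 6, 4), (5, 8, 7)]
p = [0, 0, 0, 1, 2]
M = iterative_compute_opt(jobs, p)
```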
The section notes that the two approaches to filling in this array are very similar, but that we will be using the iterative approach from now on. The section then lays out the three basic properties of subproblems (derived from the original problem) that must hold in order to use our dynamic programming techniques: (I) there can only be a polynomial number of subproblems; (II) the solution to the original problem can be easily computed from the solutions to the subproblems; and (III) there is a natural ordering of subproblems from "smallest" to "largest" such that there exists an easy-to-compute recurrence.
I found this section helpful and brisk. I would rate the readability at 10/10.
