Chapter 6
In dynamic programming, we solve problems by finding a way to decompose them into subproblems that we can then use to build a solution.
Section 1: Weighted Interval Scheduling: A Recursive Procedure
In this section, we look to solve the weighted interval scheduling problem using dynamic programming. In this problem, our goal is to maximize the total value of the intervals we schedule. The key to this problem is recognizing that each interval is either in the optimal solution or it is not, so all we need to do is make that choice for each one. If we don't accept a job, we move on to the next job. If we do accept a job, every overlapping job becomes unavailable, so we move on to the next compatible job. Since we are looking to maximize the total value, we make whichever choice yields the larger value. That is, if v(j) is the value of job j and p(j) is the largest-numbered job compatible with job j, we can write the value of the optimal solution for jobs 1…j as Opt(j) = max( v(j) + Opt(p(j)), Opt(j - 1) ).
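This recurrence can be written out directly as code. The sketch below is my own illustration, not from the text: jobs are assumed to be (start, finish, value) tuples sorted by finish time, and p(j) is found here by a simple backwards scan for clarity.

```python
def weighted_interval_naive(jobs):
    """jobs: list of (start, finish, value) tuples, sorted by finish time.
    Returns the value of an optimal schedule (naive recursion)."""

    def p(j):
        # Largest-numbered job i < j that finishes before job j starts,
        # or -1 if no such job exists.
        for i in range(j - 1, -1, -1):
            if jobs[i][1] <= jobs[j][0]:
                return i
        return -1

    def opt(j):
        if j < 0:
            return 0
        # Either take job j (plus the best schedule of compatible earlier
        # jobs) or skip it and take the best of jobs 0..j-1.
        return max(jobs[j][2] + opt(p(j)), opt(j - 1))

    return opt(len(jobs) - 1)
```

For example, with the jobs (0, 3, 2), (1, 4, 4), (3, 6, 4), (5, 8, 7), the optimum takes the second and fourth jobs for a total value of 11.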
This recursive algorithm to compute Opt(j) is not very efficient, however: the naive recursion recomputes the same subproblems over and over, giving exponential run time. We need to use memoization to improve the run time. This technique saves each value the first time it is computed so we are not recomputing the same value repeatedly. This gives our algorithm a run time of O(n) if the intervals are sorted by finish time and the p(j) values are precomputed.
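A memoized version might look like the following sketch (mine, not the text's): p(j) is precomputed with binary search over the sorted finish times, and the recursion caches each Opt(j).

```python
from bisect import bisect_right
from functools import lru_cache

def weighted_interval_memo(jobs):
    """jobs: list of (start, finish, value) tuples, sorted by finish time.
    Returns the value of an optimal schedule in O(n) after sorting."""
    finishes = [f for _, f, _ in jobs]
    # p[j]: largest-numbered compatible earlier job, or -1 if none.
    p = [bisect_right(finishes, s) - 1 for s, _, _ in jobs]

    @lru_cache(maxsize=None)
    def opt(j):
        if j < 0:
            return 0
        # Each opt(j) is computed once and cached thereafter.
        return max(jobs[j][2] + opt(p[j]), opt(j - 1))

    return opt(len(jobs) - 1)
```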
Now we have an efficient way to find the value of the optimal solution, but we also want to know which intervals to accept. We can recover them by backtracking through the memoization array, which also takes O(n) time.
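The backtracking step can be sketched as follows (again my own illustration): job j belongs to some optimal solution exactly when taking it is at least as good as skipping it, so we walk backwards through the memo table making that comparison.

```python
from bisect import bisect_right

def schedule(jobs):
    """jobs: list of (start, finish, value) tuples, sorted by finish time.
    Returns (optimal value, list of chosen job indices)."""
    finishes = [f for _, f, _ in jobs]
    p = [bisect_right(finishes, s) - 1 for s, _, _ in jobs]

    memo = {-1: 0}
    def opt(j):
        if j not in memo:
            memo[j] = max(jobs[j][2] + opt(p[j]), opt(j - 1))
        return memo[j]
    best = opt(len(jobs) - 1)

    # Backtrack: take job j iff taking it is at least as good as skipping it.
    chosen, j = [], len(jobs) - 1
    while j >= 0:
        if jobs[j][2] + memo[p[j]] >= memo[j - 1]:
            chosen.append(j)
            j = p[j]          # jump to the next compatible job
        else:
            j -= 1            # skip job j
    return best, chosen[::-1]
```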
Readability: 8
Section 2: Principles of Dynamic Programming
In the last section, we solved the weighted interval scheduling problem using memoized recursion. This process provided us with an array of optimal values that we could then use to find the intervals we wanted. It's easy to see, then, that all we really needed to solve the problem was the array itself. We can also produce this array with an iterative algorithm, which gives us another solution.
Our dynamic programming problems will all follow the general procedure seen in the previous problem: express the optimal solution in terms of subproblems, use these to build up a recursive solution, and then convert that solution to an iterative one.
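For weighted interval scheduling, the iterative version might look like this sketch (my own, using 1-based table indices so M[j] is the optimum over the first j jobs):

```python
from bisect import bisect_right

def weighted_interval_iterative(jobs):
    """jobs: list of (start, finish, value) tuples, sorted by finish time.
    Fills the table M bottom-up instead of recursing."""
    n = len(jobs)
    finishes = [f for _, f, _ in jobs]

    M = [0] * (n + 1)          # M[j] = optimal value over the first j jobs
    for j in range(1, n + 1):
        s, f, v = jobs[j - 1]
        pj = bisect_right(finishes, s)   # number of jobs compatible with job j
        M[j] = max(v + M[pj], M[j - 1])
    return M[n]
```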
Readability: 8
Section 3: Segmented Least Squares: Multi-way Choices
Now we move on to a slightly more complicated problem. In this problem, we have n points in the plane and we want to fit a sequence of line segments to the data. Each line segment has a fixed cost and an error. Adding these for every segment in a given solution gives us the penalty for that solution, and we want to minimize this penalty. To solve this problem, we first need to recognize that the last point must belong to some segment, say one beginning at point i. Once we know this segment, the whole solution is that segment plus the optimal solution for the points that come before it. That is, if e(i, j) is the minimum error of a line fitted to points i through j, and C is the cost of a segment, then the value of the optimal solution for points 1 through n is Opt(n) = min over 1 ≤ i ≤ n of ( e(i, n) + C + Opt(i - 1) ).
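This recurrence can also be sketched in code. The version below is my own illustration: e(i, j) is computed from the standard closed-form least-squares formulas, and the table OPT is filled in bottom-up, with OPT[j] the minimum penalty for the first j points.

```python
def segmented_least_squares(points, C):
    """points: list of (x, y) pairs sorted by x; C: cost per segment.
    Returns the minimum total penalty (sum of segment errors plus C each)."""
    n = len(points)

    def seg_error(i, j):
        # Least-squares error of one line through points i..j (0-based, incl.)
        pts = points[i:j + 1]
        m = len(pts)
        sx = sum(x for x, _ in pts)
        sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts)
        sxy = sum(x * y for x, y in pts)
        denom = m * sxx - sx * sx
        a = (m * sxy - sx * sy) / denom if denom else 0.0
        b = (sy - a * sx) / m
        return sum((y - a * x - b) ** 2 for x, y in pts)

    OPT = [0.0] * (n + 1)      # OPT[j] = min penalty for the first j points
    for j in range(1, n + 1):
        # The last segment covers points i+1..j (1-based); try every i.
        OPT[j] = min(seg_error(i, j - 1) + C + OPT[i] for i in range(j))
    return OPT[n]
```

As written this is O(n^3) because each e(i, j) is recomputed from scratch; precomputing the prefix sums brings it down to O(n^2).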