Chapter 6

In dynamic programming, we solve problems by decomposing them into subproblems whose solutions we can then combine to build a solution to the whole problem.

Section 1: Weighted Interval Scheduling: A Recursive Procedure

In this section, we look to solve the weighted interval scheduling problem using dynamic programming. In this problem, each interval (job) has a value, and our goal is to schedule a set of mutually compatible intervals of maximum total value. The key to this problem is recognizing that each interval is either in the optimal solution or it is not, so for each job we only need to choose between these two options. If we reject job j, the remaining problem is over the jobs before it. If we accept job j, some of those jobs may be incompatible with it, so the remaining problem is over the jobs up to the next compatible one. We are looking to maximize the total value, so we make whichever choice accomplishes this. That is, if v(j) is the value of job j and p(j) is the largest index i < j such that job i is compatible with job j, we can write the value of the optimal solution for jobs 1…j as Opt(j) = max(v(j) + Opt(p(j)), Opt(j-1)).

This recursive algorithm to compute Opt(j) is not very efficient, however: the naive recursion recomputes the same subproblems over and over, taking exponential time. We need to use memoization to improve the run time. This technique saves each value as it is computed so we are not recomputing the same value repeatedly. This gives our algorithm a run time of O(n), assuming the intervals are sorted by finish time and the p(j) values are available.
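A minimal sketch of the memoized recursion in Python, using a small hypothetical instance (the v and p arrays below are made up for illustration; in practice they come from the input jobs, sorted by finish time, with p precomputed):

```python
from functools import lru_cache

# Hypothetical instance, 1-indexed (index 0 unused).
# v[j] is the value of job j; p[j] is the largest i < j compatible with j.
v = [0, 2, 4, 4, 7, 2, 1]
p = [0, 0, 0, 1, 0, 3, 3]

@lru_cache(maxsize=None)     # memoization: each Opt(j) is computed once
def opt(j):
    if j == 0:
        return 0             # no jobs left to consider
    # either take job j (and jump to the next compatible job) or skip it
    return max(v[j] + opt(p[j]), opt(j - 1))

print(opt(6))  # → 8
```

With the memo in place, each of the n subproblems is solved once, giving the O(n) running time.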

Now we have an efficient way to find the value of the optimal solution, but we also want to know which intervals to accept. We can find them by backtracking through the memoization array, which also takes O(n) time.
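The backtracking step can be sketched as follows (same hypothetical v and p arrays as before; M is the filled memo table, and job j belongs to an optimal solution exactly when taking it is at least as good as skipping it):

```python
# Hypothetical instance, 1-indexed (index 0 unused).
v = [0, 2, 4, 4, 7, 2, 1]
p = [0, 0, 0, 1, 0, 3, 3]

M = {0: 0}                   # memo table: M[j] = Opt(j)
def opt(j):
    if j not in M:
        M[j] = max(v[j] + opt(p[j]), opt(j - 1))
    return M[j]

opt(6)                       # fills M for j = 0..6

def find_solution(j):
    # O(n) walk back through the memo table
    chosen = []
    while j > 0:
        if v[j] + M[p[j]] >= M[j - 1]:
            chosen.append(j)  # job j is in the solution
            j = p[j]          # jump to the next compatible job
        else:
            j -= 1            # job j is not in the solution
    return chosen[::-1]

print(find_solution(6))  # → [1, 3, 5]
```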

Readability: 8

Section 2: Principles of Dynamic Programming

In the last section, we solved the weighted interval scheduling problem using memoized recursion. This process provided us with an array of optimal values we could then use to find the intervals we wanted. It's easy to see, then, that all we really needed to solve the problem was the array itself. We can also produce this array with an iterative algorithm, which gives us another solution.

Our dynamic programming problems will all follow the general procedure seen in the previous problem. We express the optimal solution in terms of solutions to subproblems, use this recurrence to build up a solution recursively, and then convert the recursion into an iterative algorithm.
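For the weighted interval scheduling recurrence, the iterative (bottom-up) version might look like this (again with the same kind of hypothetical v and p arrays as before):

```python
def iterative_opt(v, p, n):
    # Bottom-up fill: compute M[0..n] in increasing order of j,
    # so each subproblem's value is ready when it is needed.
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = max(v[j] + M[p[j]], M[j - 1])
    return M

# Hypothetical instance, 1-indexed (index 0 unused).
v = [0, 2, 4, 4, 7, 2, 1]
p = [0, 0, 0, 1, 0, 3, 3]
print(iterative_opt(v, p, 6)[6])  # → 8
```

The loop makes the evaluation order explicit, which is what the memoized recursion was doing implicitly.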

Readability: 8

Section 3: Segmented Least Squares: Multi-way Choices

Now we move on to a slightly more complicated problem. In this problem, we have n points in the plane and we want to fit a sequence of line segments to the data. Each segment in a solution has a fixed cost and an error; adding these over all segments in a given solution gives us the penalty for that solution. We want to minimize this penalty. To solve this problem, we first need to recognize that the last point must belong to some segment. Once we know this segment, the whole solution is that segment plus the optimal solution for the points before it. That is, if e(i, j) is the minimum error of a line fitted to points i through j, and C is the cost of a segment, then the value of the optimal solution for points 1 through n is Opt(n) = min over 1 ≤ i ≤ n of [e(i, n) + C + Opt(i - 1)]. This gives us an iterative algorithm that fills an array with the value of the optimal solution up to each point. We can then backtrack through this array to recover the partition into segments. Computing all the e(i, j) values requires O(n^3) time, and the algorithm is O(n^2) once these are calculated.
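A sketch of the whole algorithm in Python, using the standard closed-form least-squares formulas for each e(i, j) (the example points and the choice C = 1.0 are made up for illustration):

```python
def segmented_least_squares(points, C):
    # points: list of (x, y) sorted by x; C: per-segment cost
    n = len(points)
    x = [0.0] + [p[0] for p in points]   # 1-indexed
    y = [0.0] + [p[1] for p in points]

    # e[i][j]: minimum squared error of one line through points i..j
    e = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            m = j - i + 1
            sx, sy = sum(x[i:j+1]), sum(y[i:j+1])
            sxx = sum(v * v for v in x[i:j+1])
            sxy = sum(x[k] * y[k] for k in range(i, j + 1))
            denom = m * sxx - sx * sx
            a = (m * sxy - sx * sy) / denom if denom else 0.0
            b = (sy - a * sx) / m
            e[i][j] = sum((y[k] - a * x[k] - b) ** 2 for k in range(i, j + 1))

    # Opt(j) = min over i of e(i, j) + C + Opt(i - 1)
    M = [0.0] * (n + 1)
    cut = [0] * (n + 1)       # best starting point i, for backtracking
    for j in range(1, n + 1):
        best = min(range(1, j + 1), key=lambda i: e[i][j] + C + M[i - 1])
        cut[j] = best
        M[j] = e[best][j] + C + M[best - 1]

    # backtrack: recover the segments as (first point, last point) pairs
    segments, j = [], n
    while j > 0:
        segments.append((cut[j], j))
        j = cut[j] - 1
    return M[n], segments[::-1]

# Points lying on two lines: y = x for the first three, y = -x + 5 after
pts = [(1, 1), (2, 2), (3, 3), (4, 1), (5, 0), (6, -1)]
print(segmented_least_squares(pts, 1.0))  # → (2.0, [(1, 3), (4, 6)])
```

With C = 1.0 the optimum here is two zero-error segments, giving a penalty of exactly 2C.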

Readability: 7

Section 4: Subset Sums and Knapsacks

In this section we consider the Knapsack problem. We have a knapsack that can carry a weight of W. We also have some items with different weights and values. We want to maximize the value of the objects we put in our knapsack, without going over the weight limit.

We will first consider the simpler case of this problem, the Subset Sum problem, where each item's value equals its weight, so the goal is simply to pack as much total weight as possible. This problem is similar to the weighted interval problem (we know we either want an item or we don't), but we need an extra variable to keep track of the weight remaining in the bag. Thus we use Opt(i, w) to denote the value of the optimal solution considering items 1 through i with max weight allowed w. This gives us a recurrence of Opt(i, w) = max[Opt(i-1, w), w(i) + Opt(i-1, w - w(i))]. The first option indicates we didn't choose item i; the second indicates we did, so we reduced the available weight by the weight of i. Of course, if w(i) > w, then we cannot choose it and we have Opt(i, w) = Opt(i-1, w). This algorithm fills a two-dimensional array of size n by W and thus computes the optimal weight in O(nW) time. From the filled array, we can then find the items we should include in O(n) time.
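A sketch of the subset sum algorithm in Python, including the O(n) recovery step (the example weights and capacity are hypothetical):

```python
def subset_sum(weights, W):
    # weights: 1-indexed list (weights[0] unused); W: weight limit
    n = len(weights) - 1
    # M[i][w] = Opt(i, w): max total weight using items 1..i, limit w
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i] > w:
                M[i][w] = M[i - 1][w]            # item i does not fit
            else:
                M[i][w] = max(M[i - 1][w],
                              weights[i] + M[i - 1][w - weights[i]])

    # recover the chosen items in O(n): item i was taken
    # exactly when it changed the optimal value
    chosen, w = [], W
    for i in range(n, 0, -1):
        if M[i][w] != M[i - 1][w]:
            chosen.append(i)
            w -= weights[i]
    return M[n][W], chosen[::-1]

print(subset_sum([0, 2, 3, 4], 6))  # → (6, [1, 3])
```

Note that O(nW) is pseudo-polynomial: it depends on the magnitude of W, not just the number of items.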

In the general case where each object has its own value v(i), the recurrence becomes Opt(i, w) = max[Opt(i-1, w), v(i) + Opt(i-1, w - w(i))], again with Opt(i, w) = Opt(i-1, w) if w(i) > w. The run time is the same, O(nW).
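The general knapsack recurrence only changes which quantity is maximized (the example weights, values, and capacity are hypothetical):

```python
def knapsack(weights, values, W):
    # 1-indexed weights/values (index 0 unused); W: weight limit
    n = len(weights) - 1
    # M[i][w] = Opt(i, w): max total value using items 1..i, limit w
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i] > w:
                M[i][w] = M[i - 1][w]            # item i does not fit
            else:
                M[i][w] = max(M[i - 1][w],
                              values[i] + M[i - 1][w - weights[i]])
    return M[n][W]

# Best choice here is items 2 and 3: weight 2 + 3 = 5, value 10 + 12 = 22
print(knapsack([0, 1, 2, 3], [0, 6, 10, 12], 5))  # → 22
```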

Readability: 7

courses/cs211/winter2012/journals/virginia/chapter6.txt · Last modified: 2012/03/26 04:53 by lovellv