courses:cs211:winter2012:journals:virginia:chapter6 (lovellv, 2012/03/26)

===== Section 3: Segmented Least Squares: Multi-way Choices =====

Now we move on to a slightly more complicated problem. In this problem, we have n points in the plane and we want to fit line segments to the data. Each line segment has a cost and an error, and summing these over all segments in a given solution gives the penalty for that solution. We want to minimize this penalty. To solve this problem, we first observe that the last point must belong to some segment. Once we know this segment, the whole solution is that segment plus the optimal solution for the remaining points not fitted to it. That is, if e(i, j) is the minimum error of a line fitted to points i through j, and C is the cost of a segment, then the value of the optimal solution for points 1 through n is Opt(n) = min(1 <= i <= n)[e(i, n) + C + Opt(i - 1)]. This gives us an iterative algorithm that fills an array with the value of the optimal solution up to each point. We can then backtrack through this array to recover the partition. Computing all the e(i, j) values takes O(n^3) time, and the dynamic programming step takes O(n^2) once they are calculated.
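To make the recurrence concrete, here is a minimal Python sketch of the dynamic program (a hedged illustration, not code from the text). It assumes the e(i, j) values have already been precomputed into a 1-indexed table `err`; the function and variable names are my own.

```python
def segmented_least_squares(err, C, n):
    """Sketch of the segmented least squares DP.

    err[i][j] is assumed to hold e(i, j), the minimum error of a single
    line fit to points i..j (1-indexed); C is the per-segment cost.
    """
    # opt[j] = value of the optimal solution for points 1..j
    opt = [0.0] * (n + 1)
    choice = [0] * (n + 1)  # the i that achieved the minimum, for backtracking
    for j in range(1, n + 1):
        # Opt(j) = min over 1 <= i <= j of e(i, j) + C + Opt(i - 1)
        opt[j], choice[j] = min(
            (err[i][j] + C + opt[i - 1], i) for i in range(1, j + 1)
        )
    # Backtrack through the array to recover the partition into segments.
    segments, j = [], n
    while j > 0:
        i = choice[j]
        segments.append((i, j))
        j = i - 1
    segments.reverse()
    return opt[n], segments
```

With all errors zero (points on one line) and C = 1, the optimal solution is a single segment covering all the points.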

Readability: 7

===== Section 4: Subset Sums and Knapsacks =====

In this section we consider the Knapsack problem. We have a knapsack that can carry a total weight of W. We also have some items with different weights and values. We want to maximize the value of the items we put in our knapsack without going over the weight limit.

We will first consider the simpler case of this problem where each item's value equals its weight (the Subset Sum problem). This problem is similar to the weighted interval scheduling problem (we either take an item or we don't), but we need an extra variable to keep track of the weight remaining in the bag. Thus we use Opt(i, w) to denote the value of the optimal solution considering items 1 through i with weight limit w. This gives us the recurrence Opt(i, w) = max[Opt(i-1, w), w(i) + Opt(i-1, w - w(i))]. The first option indicates we did not choose item i; the second indicates we did, so we reduce the available weight by the weight of item i. Of course, if w(i) > w, then we cannot choose item i and we have Opt(i, w) = Opt(i-1, w). This algorithm fills a two-dimensional array of size n by W and thus computes the optimal value in O(nW) time. From the filled array, we can then find the items we should include in O(n) time.

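A minimal Python sketch of this table-filling algorithm, including the O(n) backtracking step that recovers the chosen items (names are my own assumptions, not from the text; weights are assumed to be positive integers).

```python
def subset_sum(weights, W):
    """Subset Sum DP: maximize total weight of chosen items, at most W."""
    n = len(weights)
    # opt[i][w] = max total weight achievable using items 1..i with limit w
    opt = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi = weights[i - 1]
        for w in range(W + 1):
            if wi > w:
                opt[i][w] = opt[i - 1][w]  # item i cannot fit
            else:
                opt[i][w] = max(opt[i - 1][w],            # skip item i
                                wi + opt[i - 1][w - wi])  # take item i
    # Backtrack in O(n): item i was taken iff it changed the table entry.
    chosen, w = [], W
    for i in range(n, 0, -1):
        if opt[i][w] != opt[i - 1][w]:
            chosen.append(i)
            w -= weights[i - 1]
    return opt[n][W], sorted(chosen)
```

For example, with weights 3, 4, 5 and W = 7, the best achievable total is 7, using items 1 and 2.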
In the general case where each item has a distinct value v(i), the recurrence becomes Opt(i, w) = max[Opt(i-1, w), v(i) + Opt(i-1, w - w(i))], again with Opt(i, w) = Opt(i-1, w) if w(i) > w. The running time is the same, O(nW).

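The general case requires only a one-term change to the sketch above: the "take item i" branch adds v(i) instead of w(i). Again a hedged illustration with my own names, not code from the text.

```python
def knapsack(weights, values, W):
    """General Knapsack DP: maximize total value with total weight <= W."""
    n = len(weights)
    # opt[i][w] = max value achievable using items 1..i with weight limit w
    opt = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for w in range(W + 1):
            if wi > w:
                opt[i][w] = opt[i - 1][w]  # item i cannot fit
            else:
                # only this term differs from Subset Sum: v(i), not w(i)
                opt[i][w] = max(opt[i - 1][w],
                                vi + opt[i - 1][w - wi])
    return opt[n][W]
```

For example, with weights 1, 2, 3, values 6, 10, 12, and W = 5, the optimum is 22 (items 2 and 3).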
Readability: 7