===== Section 6.2 - Principles of Dynamic Programming: Memoization or Iteration over Subproblems =====
This section opens by revisiting the memoization technique used in Section 6.1. Section 6.2 covers this technique in broader terms, as it lies at the heart of dynamic programming.
In the previous section, we used an array to maintain the optimal solution for each value, but we filled it using recursion. In this section, we are given an algorithm that does this iteratively. It sets the first value in the array M equal to zero, and then for each j = 1, 2, ..., n it lets M[j] = max(value(j) + M[p(j)], M[j-1]).
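
As a concrete aid (not from the book), here is a minimal Python sketch of this iterative approach for weighted interval scheduling. It assumes jobs are given as (start, finish, value) tuples sorted by finish time; the function name iterative_opt and the precomputation of p(j) by binary search are my own choices.

<code python>
import bisect

def iterative_opt(jobs):
    """Iterative dynamic program for weighted interval scheduling.

    jobs: list of (start, finish, value) tuples sorted by finish time.
    Returns the array M, where M[j] is the optimal value achievable
    using only the first j jobs (so M[n] is the answer).
    """
    n = len(jobs)
    finishes = [f for (_, f, _) in jobs]

    # p[j] = largest index i < j whose job is compatible with job j
    # (1-indexed, matching the book; p[j] = 0 means no compatible job).
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        start_j = jobs[j - 1][0]
        p[j] = bisect.bisect_right(finishes, start_j, 0, j - 1)

    M = [0] * (n + 1)          # M[0] = 0
    for j in range(1, n + 1):  # fill the array left to right
        value_j = jobs[j - 1][2]
        M[j] = max(value_j + M[p[j]], M[j - 1])
    return M
</code>

For example, iterative_opt([(0, 3, 2), (1, 5, 4), (4, 6, 4)]) returns [0, 2, 4, 6]: the best schedule takes the first and third jobs for a total value of 6.
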
The section notes that both approaches to computing this array are very similar, but that we will use the iterative approach from now on. The section then lays out the three basic properties that subproblems (derived from the original problem) must satisfy in order to use our dynamic programming techniques:
(I) There can only be a polynomial number of subproblems.
(II) The solution to the original problem can be easily computed from the solutions to the subproblems.
(III) There is an intuitive ordering of subproblems from "smallest" to "largest," together with an easy-to-compute recurrence that determines a subproblem's solution from the solutions to smaller subproblems.

I found this section helpful and brisk. I would rate the readability at 10/10.

===== Section 6.3 - Segmented Least Squares: Multi-way Choices =====

In the previous sections, the logic of solving the problems rested on a binary choice: either the final job lay in the solution or it did not. But this is not always the case. In this section, we will tackle a multi-way choice problem.

The problem we will be examining has to do with points in an x-y plane. We know how to minimize the sum of squared errors for a set of points by creating a single best-fit line. But what if we could have multiple lines? And what if we wanted to both keep the sum of squared errors small (accurate best-fit lines) and keep the number of lines small?

Say that we are given a set P of points in an x-y plane. We need to attach a penalty to partitioning the x-values (to creating a new line). The penalty of a partition is the sum of:
(i) the number of segments into which we partition P, times a fixed given multiplier C > 0, and
(ii) for each segment, the error value of the optimal line through that segment.

Our goal here is to find a partition that minimizes this penalty. The section walks through a few subproofs before laying out the algorithm itself. First, we call e(i,j) the minimum error of any line with respect to points p_i, ..., p_j (points ordered by increasing x-value). The book then lays out the fact that if the last segment of the optimal partition is p_i, ..., p_n, then the value of the optimal solution is OPT(n) = e(i,n) + C + OPT(i-1).

The algorithm defines an array M[0,...,n] and sets M[0] = 0. And for all pairs with i <= j, we compute the least-squares error e(i,j) for the segment p_i, ..., p_j. Then, for each j = 1, 2, ..., n, we set M[j] = min over 1 <= i <= j of (e(i,j) + C + M[i-1]); the value of the optimal partition is M[n].

I was absent from class during the coverage of this material, so this section was a little more difficult for me to digest. Still, I found the book's explanations fairly straightforward. I would rate the readability of this section at 7/10.