====Section 6.2: Principles of Dynamic Programming: Memoization or Iteration over Subproblems====

Once we have the array M, the problem is solved: M[n] contains the value of the optimal solution on the full instance, and Find-Solution can be used to trace back through M efficiently and return an optimal solution itself. We can directly compute the entries in M by an iterative algorithm, rather than using memoized recursion. This algorithm is below:

<code>
Iterative-Compute-Opt
  M[0] = 0
  For j = 1, 2, ..., n
    M[j] = max(v<sub>j</sub> + M[p(j)], M[j - 1])
  Endfor
</code>

The running time of Iterative-Compute-Opt is clearly O(n), since it explicitly runs for n iterations and spends constant time in each.
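
As a concrete check on the pseudocode, here is a short runnable sketch in Python; the tuple representation of intervals and the function name are my own, while the text assumes the values v<sub>j</sub> and compatibility indices p(j) are already given.

<code python>
import bisect

def iterative_compute_opt(intervals):
    """Iterative DP for weighted interval scheduling.

    intervals: list of (start, finish, value) tuples.
    Returns (M, p, order): M[j] is the optimal value over the first j
    intervals after sorting by finish time; p[j] is the largest index
    i < j whose interval is compatible with interval j.
    """
    order = sorted(intervals, key=lambda iv: iv[1])  # sort by finish time
    finishes = [iv[1] for iv in order]
    n = len(order)

    # p[j]: number of intervals (among the first j - 1) that finish no
    # later than interval j starts; since the list is sorted by finish
    # time, that count is also the index of the last compatible interval.
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        start_j = order[j - 1][0]
        p[j] = bisect.bisect_right(finishes, start_j, 0, j - 1)

    # The iterative loop: M[j] = max(v_j + M[p(j)], M[j - 1]).
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        v_j = order[j - 1][2]
        M[j] = max(v_j + M[p[j]], M[j - 1])
    return M, p, order
</code>

On the instance {(0, 3, 3), (1, 4, 2), (3, 6, 4)}, for example, the loop fills M = [0, 3, 3, 7]: the third interval is compatible with the first, so taking both beats any other schedule.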

To set about developing an algorithm based on dynamic programming, one needs a collection of subproblems derived from the original problem that satisfies a few basic properties:
  * There are only a polynomial number of subproblems.
  * The solution to the original problem can be easily computed from the solutions to the subproblems.
  * There is a natural ordering on the subproblems from "smallest" to "largest", together with an easy-to-compute recurrence that allows one to determine the solution to a subproblem from the solutions to some number of smaller subproblems.
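
The weighted interval scheduling subproblems satisfy all three properties, whether we compute them iteratively as above or by memoized recursion. A minimal memoized counterpart might look like the following Python sketch; the argument names values and p are my own, standing in for the v<sub>j</sub> and p(j) of the text.

<code python>
from functools import lru_cache

def memoized_compute_opt(values, p):
    """Memoized recursion for weighted interval scheduling.

    values[j] is the value of interval j and p[j] the last interval
    compatible with j, both 1-based with a padding entry at index 0.
    """
    @lru_cache(maxsize=None)
    def opt(j):
        if j == 0:
            return 0  # the empty subproblem has value 0
        # Two choices: include interval j or skip it.
        return max(values[j] + opt(p[j]), opt(j - 1))

    return opt(len(values) - 1)
</code>

Each subproblem is computed once and cached, so the recursion does the same O(n) work as the iterative loop, just in a different order.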

====Section 6.3: Segmented Least Squares: Multi-way Choices====

In the problem we consider here, the recurrence involves what might be called "multi-way choices": at each step, we have a polynomial number of possibilities to consider for the structure of the optimal solution. The problem is to find a line of best fit for a collection of points in the plane. This process, called regression, is used frequently in statistics, but here the points do not all fall along a single straight line. Any one line through such a set of points would have a terrible error; with two lines, we could achieve quite a small error. So we could formulate a new problem as follows: rather than seek a single line of best fit, we are allowed to pass an arbitrary set of lines through the points, and we seek a set of lines that minimizes the error. To rule out the trivial solution of one line per point, the formulation must require us to fit the points well using as few lines as possible. We now formulate a problem, the Segmented Least Squares Problem, that captures these issues quite cleanly.

Suppose we let OPT(i) denote the optimum solution for the points p<sub>1</sub>, ..., p<sub>i</sub>, and we let e<sub>i, j</sub> denote the minimum error of any line with respect to p<sub>i</sub>, p<sub>i + 1</sub>, ..., p<sub>j</sub>. Then our observation above says that if the last segment of the optimal partition is p<sub>i</sub>, ..., p<sub>n</sub>, then the value of the optimal solution is OPT(n) = e<sub>i, n</sub> + C + OPT(i - 1). The algorithm is as follows:

<code>
Segmented-Least-Squares(n)
  Array M[0 ... n]
  Set M[0] = 0
  For all pairs i <= j
    Compute the least squares error e<sub>i, j</sub> for the segment p<sub>i</sub>, ..., p<sub>j</sub>
  Endfor
  For j = 1, 2, ..., n
    Use the recurrence (6.7) to compute M[j]
  Endfor
  Return M[n]
</code>

The algorithm to compute an optimum partition is as follows:

<code>
Find-Segments(j)
  If j = 0 then
    Output nothing
  Else
    Find an i that minimizes e<sub>i, j</sub> + C + M[i - 1]
    Output the segment {p<sub>i</sub>, ..., p<sub>j</sub>} and the result of
      Find-Segments(i - 1)
  Endif
</code>

**Analyzing The Run Time**

There are O(n<sup>2</sup>) pairs (i, j) for which this computation is needed, and for each pair (i, j) we can use the formula given at the beginning of this section to compute e<sub>i, j</sub> in O(n) time. Thus the total running time to compute all e<sub>i, j</sub> values is O(n<sup>3</sup>). Following this, the algorithm has n iterations, for values j = 1, ..., n. For each value of j, we determine the minimum in the recurrence (6.7) to fill in the array entry M[j]; this takes O(n) time per value of j, for a total of O(n<sup>2</sup>). Thus the running time is O(n<sup>2</sup>) once all the e<sub>i, j</sub> values have been determined.
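
To make the whole pipeline concrete, here is a hedged Python sketch that precomputes the e<sub>i, j</sub> values with the standard closed-form slope and intercept (standing in for the formula the text refers to), fills M by the recurrence, and traces the segments back as in Find-Segments. The function names and the tie-breaking on ties are my own choices.

<code python>
def segment_error(points, i, j):
    """Least-squares error of the best single line through
    points[i..j] (0-based, inclusive), via the closed-form fit."""
    pts = points[i:j + 1]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom if denom else 0.0  # slope
    b = (sy - a * sx) / n                              # intercept
    return sum((y - a * x - b) ** 2 for x, y in pts)

def segmented_least_squares(points, C):
    """Returns (minimum cost, segments as 1-based (i, j) pairs)."""
    n = len(points)
    # e[i][j]: min error of one line through p_i, ..., p_j (1-based).
    e = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            e[i][j] = segment_error(points, i - 1, j - 1)

    # M[j] = min over i <= j of e[i][j] + C + M[i - 1], the recurrence.
    M = [0.0] * (n + 1)
    choice = [0] * (n + 1)  # best i for each j, kept for the traceback
    for j in range(1, n + 1):
        best_i = min(range(1, j + 1), key=lambda i: e[i][j] + C + M[i - 1])
        choice[j] = best_i
        M[j] = e[best_i][j] + C + M[best_i - 1]

    # Find-Segments: walk the recorded choices back from j = n.
    segments = []
    j = n
    while j > 0:
        i = choice[j]
        segments.append((i, j))
        j = i - 1
    segments.reverse()
    return M[n], segments
</code>

On a small instance with one breakpoint, such as the points (0, 0), (1, 1), (2, 2), (3, 1), (4, 0) with C = 0.1, the algorithm pays 2C for two zero-error segments rather than fit one line with large error, matching the trade-off the analysis describes.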