===== 6.2 Principles of Dynamic Programming: Memoization or Iteration over Subproblems =====

__Designing the Algorithm__
  * The array M encodes the idea that we use the values of optimal solutions to the subproblems on intervals {1, 2, ..., j} for each j
  * M[n] contains the value of the optimal solution on the full instance
  * Find-Solution can be used to trace back through M efficiently and return an optimal solution itself, not just its value
  * The entries of M can be computed by an iterative algorithm, rather than by memoized recursion

{{:courses:cs211:winter2018:journals:patelk:iterative-compute-opt.png?nolink&400|}}
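The iterative algorithm in the figure can be sketched as follows. This is an illustrative sketch, not the book's exact pseudocode: it assumes hypothetical 1-indexed inputs `v` (where `v[j]` is the value of interval j, with `v[0]` a dummy entry) and `p` (where `p[j]` is the largest index i < j compatible with interval j, or 0 if none), both precomputed after sorting intervals by finish time.

```python
def iterative_compute_opt(v, p):
    """Fill the array M bottom-up, where M[j] = OPT(j) for intervals 1..j."""
    n = len(v) - 1          # v[0] is a dummy so intervals are numbered 1..n
    M = [0] * (n + 1)       # base case: M[0] = 0 (no intervals)
    for j in range(1, n + 1):
        # Either take interval j (value v[j] plus OPT on intervals 1..p(j))
        # or skip it (OPT on intervals 1..j-1).
        M[j] = max(v[j] + M[p[j]], M[j - 1])
    return M
```

Each entry is computed in constant time from two earlier entries, which is what makes the O(n) runtime immediate.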


__Analyzing the Algorithm__
  * One can prove by induction on j that this algorithm writes OPT(j) into array entry M[j]
  * As before, we can pass the filled-in array M to Find-Solution to get an optimal solution in addition to the value
  * Runtime: O(n), since the algorithm explicitly runs for n iterations and spends constant time in each
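The Find-Solution traceback mentioned above can be sketched like this. It assumes the same hypothetical conventions as an iterative fill of M: 1-indexed arrays `v` (interval values, `v[0]` a dummy), `p` (latest compatible earlier interval, 0 if none), and a filled-in `M` with `M[j] = OPT(j)`.

```python
def find_solution(M, v, p, j):
    """Trace back through the filled-in array M and return the indices of
    the intervals in an optimal solution for the subproblem on 1..j."""
    chosen = []
    while j > 0:
        if v[j] + M[p[j]] >= M[j - 1]:
            # Interval j belongs to some optimal solution: take it and
            # continue from the latest compatible interval p(j).
            chosen.append(j)
            j = p[j]
        else:
            # Interval j is not needed; OPT(j) = OPT(j-1).
            j -= 1
    return chosen[::-1]   # report intervals in increasing order
```

Because each step decreases j, the traceback itself also runs in O(n) time.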

__A Basic Outline of Dynamic Programming__
  * Iterative building up of subproblems: algorithms are often simpler to express this way
  * One needs a collection of subproblems derived from the original problem that satisfies a few basic properties:
    * There are only a polynomial number of subproblems
    * The solution to the original problem can be easily computed from the solutions to the subproblems
    * There is a natural ordering on the subproblems from "smallest" to "largest," together with an easy-to-compute recurrence that determines the solution to a subproblem from the solutions to some number of smaller subproblems
  * It is never clear that a collection of subproblems will be useful until one finds a recurrence linking them together
  * But it can also be difficult to think about recurrences in the absence of the "smaller" subproblems they build on
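For contrast with the iterative approach, the same recurrence can also be evaluated top-down with memoization, the other strategy this section's title refers to. A minimal sketch, again assuming hypothetical 1-indexed `v` (values, `v[0]` a dummy) and `p` (latest compatible earlier interval) arrays for weighted interval scheduling:

```python
from functools import lru_cache

def memoized_compute_opt(v, p):
    """Top-down evaluation of the weighted-interval-scheduling recurrence;
    lru_cache ensures each subproblem OPT(j) is solved only once."""
    @lru_cache(maxsize=None)
    def opt(j):
        if j == 0:
            return 0
        # Same recurrence as the iterative version: take or skip interval j.
        return max(v[j] + opt(p[j]), opt(j - 1))
    return opt(len(v) - 1)
```

Both versions do the same O(n) work; the iterative one simply makes the order of subproblem evaluation explicit.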

==== Personal Thoughts ====

The section was short and easy to process, as the information was clear and not overly complicated. I think this section is a good setup for the one that follows, as it lays a nice foundation. The next section will probably do a better job of applying this conceptual idea to concrete problems.

Readability: 8.0
Interesting: 6.5

----

===== 6.3 Segmented Least Squares: Multi-way Choices =====

  * Here, the recurrence involves "multi-way choices": at each step, the decision is not necessarily binary
  * Each step has a polynomial number of possibilities to consider for the structure of the optimal solution

__The Problem__
  * Example: line of best fit
    * Suppose our data consists of a set P of n points in the plane, denoted (x1,y1), (x2,y2), ..., (xn,yn), with x1 < x2 < ... < xn
    * Given a line L defined by the equation y = ax + b, the error of L with respect to P is the sum of its squared "distances" to the points in P
  * Goal: find a line with minimum error
    * But what if using more than one line fits the data better than a single line of best fit?
  * Modified goal: fit the points well, using as few lines as possible
    * Change detection: given a sequence of data points, we want to identify a few points in the sequence at which a discrete change occurs

  * //Formulating the Problem//
    * Partition P into some number of segments
    * Each segment is a subset of P that represents a contiguous set of x-coordinates
    * For each segment S in our partition of P, we compute the line minimizing the error with respect to the points in S
      * The penalty of a partition is defined to be the sum of the following terms:
          - the number of segments times a fixed, given multiplier C > 0
          - for each segment, the error value of the optimal line through that segment
    * As we increase the number of segments, we reduce the penalty coming from term (2), but we increase term (1)

__Designing the Algorithm__
  * We want a polynomial number of subproblems whose solutions yield a solution to the original problem, built up using a recurrence
  * For segmented least squares, the last point pn belongs to a single segment in the optimal partition, and that segment begins at some earlier point pi
  * If we knew the identity of the last segment pi, ..., pn, then we could remove those points from consideration and recursively solve the problem on the remaining points p1, ..., pi-1

{{:courses:cs211:winter2018:journals:patelk:lineofbestfit.png?nolink&400|}}

  * If the last segment of the optimal partition is pi, ..., pn, then the value of the optimal solution is OPT(n) = e(i,n) + C + OPT(i-1), where e(i,n) is the error of the optimal line through pi, ..., pn
  * For the subproblem on the points p1, ..., pj:

{{:courses:cs211:winter2018:journals:patelk:lineofbestfitstatement.png?nolink&400|}}

{{:courses:cs211:winter2018:journals:patelk:segmented-least-squares-alg.png?nolink&400|}}
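The algorithm in the figure amounts to filling M[0..n] with the recurrence M[j] = min over 1 ≤ i ≤ j of e(i,j) + C + M[i-1], with M[0] = 0. A sketch, assuming a hypothetical precomputed table `e` where `e[i][j]` is the least-squares error of the best-fit line through points i..j (1-indexed):

```python
def segmented_least_squares(e, C, n):
    """Fill M[0..n], where M[j] is the minimum penalty achievable for
    points 1..j: each candidate i makes pi..pj the last segment."""
    M = [0.0] * (n + 1)     # base case: M[0] = 0 (no points, no segments)
    for j in range(1, n + 1):
        # Try every possible start i of the last segment pi..pj.
        M[j] = min(e[i][j] + C + M[i - 1] for i in range(1, j + 1))
    return M
```

M[n] is then the minimum penalty for the full instance, realizing the "multi-way choice": each j considers j candidate last segments rather than a binary take-or-skip decision.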

  * We can trace back through the array M to compute an optimum partition:

{{:courses:cs211:winter2018:journals:patelk:find-segments-alg.png?nolink&400|}}
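A sketch of that traceback, under the same hypothetical conventions: a filled array `M`, an error table `e[i][j]`, and segment cost `C`. It re-evaluates the recurrence at each step to find which last segment achieved the minimum.

```python
def find_segments(M, e, C, j):
    """Trace back through M and return the segments of an optimal
    partition of points 1..j, as a list of (start, end) index pairs."""
    segments = []
    while j > 0:
        # Find a start index i achieving the minimum in the recurrence
        # for M[j]; pi..pj is then the last segment of some optimal partition.
        best_i = min(range(1, j + 1), key=lambda i: e[i][j] + C + M[i - 1])
        segments.append((best_i, j))
        j = best_i - 1      # recurse on the remaining points p1..p(i-1)
    return segments[::-1]   # report segments in left-to-right order
```

As with Find-Solution in 6.2, the traceback reuses the filled-in array rather than storing the choices during the forward pass.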

__Analyzing the Algorithm__
  * Total runtime of Segmented-Least-Squares: O(n³)
    * First, compute the values of all the least-squares errors e(i,j)
      * There are O(n²) pairs (i,j)
      * For each pair, we can compute the error e(i,j) in O(n) time
  * The main algorithm then has n iterations, j = 1, ..., n
    * For each value of j, we have to determine the minimum in the recurrence to fill in the array entry M[j]
      * This takes O(n) time for each j, for a total of O(n²)
  * So the runtime is O(n²) once all the error values have been determined, and O(n³) overall
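The O(n) per-pair error computation uses the standard closed-form least-squares formulas for the optimal slope a and intercept b. A sketch, with hypothetical 1-indexed coordinate arrays `x` and `y` (index 0 unused):

```python
def segment_error(x, y, i, j):
    """Least-squares error e(i,j) of the best-fit line y = a*x + b
    through the points (x[i],y[i]), ..., (x[j],y[j])."""
    pts = range(i, j + 1)
    n = j - i + 1
    sx = sum(x[k] for k in pts)
    sy = sum(y[k] for k in pts)
    sxy = sum(x[k] * y[k] for k in pts)
    sxx = sum(x[k] * x[k] for k in pts)
    # Closed-form optimal slope; the denominator is 0 only for a single
    # point (x-coordinates are distinct), where any line through it fits.
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom if denom else 0.0
    b = (sy - a * sx) / n
    return sum((y[k] - a * x[k] - b) ** 2 for k in pts)
```

In practice the running sums can be maintained incrementally so that all O(n²) errors are computed in O(n²) total time, but the O(n)-per-pair version above already suffices for the O(n³) bound.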

==== Personal Thoughts ====

The section did a really good job laying out the algorithms in a way that was easy to follow. I think the least squares problem is also inherently easier to understand because we have worked with similar problems in math classes before. Overall, I think the algorithms and the data structures needed to solve this problem are pretty intuitive.

Readability: 8.0
Interesting: 6.5

----
  