====== Chapter 6 ======
  
  * In general, most problems do not have a natural greedy algorithm solution that works
  * Divide and conquer is often not strong enough to reduce an exponential brute-force search down to polynomial time
  * Dynamic programming: implicitly explore the space of all possible solutions by decomposing the problem into a series of subproblems, then building up correct solutions to larger and larger subproblems
  
===== 6.1 Weighted Interval Scheduling: A Recursive Procedure =====
  
  * Each interval has a certain value (or weight), and we want to accept a set of intervals of maximum total value.

__Designing a Recursive Algorithm__
  * n requests, labeled 1,...,n
  * each request i specifies a start time si and a finish time fi
  * each request i also has a value or weight vi
  * compatible intervals: intervals that do not overlap
  * goal: select a subset of mutually compatible intervals so as to maximize the sum of the values of the selected intervals
  * sort requests in order of nondecreasing finish time
  * define p(j), for an interval j, to be the largest index i < j such that intervals i and j are disjoint
    * i.e., the rightmost interval that ends before j begins
  * p(j) = 0 if no request i < j is disjoint from j
  * Optimal solution O: either interval n belongs to O or it doesn't
    * if it does, no interval indexed between p(n) and n can belong to O
    * if it does not, O is equal to the optimal solution for requests {1,...,n-1}
  * For any value of j between 1 and n, let Oj denote the optimal solution to the problem consisting of requests {1,...,j}, and let OPT(j) denote the value of this solution
  * OPT(j) = max(vj + OPT(p(j)), OPT(j-1))
    * request j belongs to an optimal solution on the set {1,2,...,j} if and only if vj + OPT(p(j)) >= OPT(j-1)

{{:courses:cs211:winter2018:journals:patelk:computeopt.png?nolink&400|}}
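
As a rough translation of the pseudocode above, here is a minimal Python sketch of the plain recursive version (the function name and 1-indexed arrays v and p are my own stand-ins; both are assumed precomputed as described above):

<code python>
def compute_opt(j, v, p):
    """OPT(j) for requests 1..j, by brute recursion (exponential time)."""
    if j == 0:
        return 0  # no requests left, value 0
    # Either interval j is in an optimal solution (take v[j] plus the best
    # over the compatible prefix p(j)), or it is not (best over 1..j-1).
    return max(v[j] + compute_opt(p[j], v, p),
               compute_opt(j - 1, v, p))
</code>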

  * Compute-Opt subproblems grow very quickly

{{:courses:cs211:winter2018:journals:patelk:computeoptsubproblems.png?nolink&400|}}

  * we have not achieved a polynomial-time solution

**Memoizing the Recursion**
  * the previous algorithm runs in exponential time because of its redundancy: the same Compute-Opt(j) call is issued many times
  * Memoization: saving values that have already been computed in a globally accessible place
    * maintain an array M[0...n], where M[j] starts out "empty" and then stores the value of Compute-Opt(j) as soon as it is first determined

{{:courses:cs211:winter2018:journals:patelk:m-computeopt1.png?nolink&400|}}

{{:courses:cs211:winter2018:journals:patelk:m-computeopt2.png?nolink&400|}}
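
A sketch of the memoized version under the same assumptions, with Python's None standing in for "empty":

<code python>
def m_compute_opt(j, v, p, M):
    """Memoized OPT(j): each subproblem is computed at most once."""
    if M[j] is None:
        if j == 0:
            M[j] = 0
        else:
            M[j] = max(v[j] + m_compute_opt(p[j], v, p, M),
                       m_compute_opt(j - 1, v, p, M))
    return M[j]

# Usage: M = [None] * (n + 1); m_compute_opt(n, v, p, M) returns OPT(n).
</code>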

__Analyzing the Memoized Version__
  * Runtime: O(n) <- assuming the input intervals are sorted by finish time
  * the runtime is bounded by a constant times the number of calls ever issued
  * there is no explicit upper bound on the number of calls, so we look for a good measure of "progress"
  * useful progress measure: the number of entries in M that are not "empty"
  * M has only n+1 entries <- so at most O(n) calls are ever issued

__Computing a Solution in Addition to Its Value__
  * we could maintain an array S so that S[i] contains an optimal set of intervals among {1,2,...,i}
  * explicitly maintaining S would increase the runtime by an additional factor of O(n)
  * to avoid this, we recover the optimal solution from the values saved in the array M after the optimum value has been computed
    * we "trace back" through the array M to find the set of intervals in an optimal solution
    * Find-Solution calls itself recursively on strictly smaller values, making a total of O(n) recursive calls and spending constant time per call

{{:courses:cs211:winter2018:journals:patelk:find-solution-optimal.png?nolink&400|}}
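
A sketch of the trace-back over the filled-in array M, using the if-and-only-if test from the recurrence to decide whether interval j was taken:

<code python>
def find_solution(j, v, p, M):
    """Return the intervals of an optimal solution on {1,...,j}."""
    if j == 0:
        return []
    if v[j] + M[p[j]] >= M[j - 1]:
        # interval j belongs to an optimal solution; recurse on p(j)
        return find_solution(p[j], v, p, M) + [j]
    return find_solution(j - 1, v, p, M)
</code>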

==== Personal Thoughts ====

The actual algorithms in this section were a little difficult to follow, but the concept of memoization makes a lot of sense to me. I think this week's problem set will help me better understand how to find an optimal solution using dynamic programming.

Readability: 6.0
Interesting: 6.0


----


===== 6.2 Principles of Dynamic Programming: Memoization or Iteration over Subproblems =====

__Designing the Algorithm__
  * the array M encodes the notion that we are using the values of optimal solutions to the subproblems on intervals {1,2,...,j} for each j
  * M[n] contains the value of the optimal solution on the full instance
  * Find-Solution can be used to trace back through M efficiently and return an optimal solution itself
  * we can compute the entries in M by an iterative algorithm, rather than using memoized recursion

{{:courses:cs211:winter2018:journals:patelk:iterative-compute-opt.png?nolink&400|}}
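
A sketch of the iterative version: fill M[0...n] from left to right, so each entry depends only on entries already computed (same assumed v and p arrays as before):

<code python>
def iterative_compute_opt(n, v, p):
    """Fill M[0..n] iteratively; M[j] = OPT(j). O(n) after sorting."""
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = max(v[j] + M[p[j]], M[j - 1])
    return M
</code>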


__Analyzing the Algorithm__
  * one can prove by induction on j that this algorithm writes OPT(j) in array entry M[j]
  * as before, we can pass the filled-in array M to Find-Solution to get an optimal solution in addition to the value
  * Runtime: O(n) <- the algorithm explicitly runs for n iterations and spends constant time in each

__A Basic Outline of Dynamic Programming__
  * iteratively building up subproblems: algorithms are often simpler to express this way
  * one needs a collection of subproblems derived from the original problem that satisfies a few basic properties:
    * there are only a polynomial number of subproblems
    * the solution to the original problem can be easily computed from the solutions to the subproblems
    * there is a natural ordering on subproblems from "smallest" to "largest," with an easy-to-compute recurrence that allows one to determine the solution to a subproblem from the solutions to some number of smaller subproblems
  * it is never clear that a collection of subproblems will be useful until one finds a recurrence linking them together
  * but it can also be difficult to think about recurrences in the absence of the "smaller" subproblems that they build on

==== Personal Thoughts ====

The section was short and easy to process, as the information was clear and not overly complicated. I think this section is a good setup for the one that follows, as it lays a nice foundation. The next section will probably do a better job of applying this conceptual idea to one or more problems.

Readability: 8.0
Interesting: 6.5


----

===== 6.3 Segmented Least Squares: Multi-way Choices =====

  * here, the recurrence will involve "multi-way choices," meaning it is not necessarily binary
  * at each step, there is a polynomial number of possibilities to consider for the structure of the optimal solution

__The Problem__
  * Example: line of best fit
    * suppose our data consists of a set P of n points in the plane, denoted (x1,y1), (x2,y2),...,(xn,yn), with x1 < x2 < ... < xn
    * given a line L defined by the equation y = ax + b, the error of L with respect to P is the sum of its squared "distances" to the points in P
  * Goal: find a line with minimum error
    * but what if using more than one line is better than using just one line of best fit?
  * Modified goal: fit the points well, using as few lines as possible
    * change detection: given a sequence of data points, we want to identify a few points in the sequence at which a discrete change occurs

  * //Formulating The Problem//
    * partition P into some number of segments
    * each segment is a subset of P that represents a contiguous set of x-coordinates
    * for each segment S in our partition of P, we compute the line minimizing the error with respect to the points in S
      * the penalty of a partition is defined to be the sum of the following terms:
          - the number of segments times a fixed, given multiplier C > 0
          - for each segment, the error value of the optimal line through that segment
    * as we increase the number of segments, we reduce the penalty from term (2) but increase term (1); a sketch of computing a single segment's error appears below
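
As an aside, the error of a single segment can be computed with the standard closed-form least-squares formulas. This small helper (my own, not from the text) shows one way; it returns the error of the best-fit line y = ax + b for a list of points:

<code python>
def best_fit_error(pts):
    """Squared error of the least-squares line through pts = [(x, y), ...]."""
    n = len(pts)
    sx  = sum(x for x, _ in pts)
    sy  = sum(y for _, y in pts)
    sxy = sum(x * y for x, y in pts)
    sxx = sum(x * x for x, _ in pts)
    denom = n * sxx - sx * sx
    if denom == 0:                # all x equal (e.g., a single point): fits exactly
        return 0.0
    a = (n * sxy - sx * sy) / denom          # slope
    b = (sy - a * sx) / n                    # intercept
    return sum((y - (a * x + b)) ** 2 for x, y in pts)
</code>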

__Designing the Algorithm__
  * we need a polynomial number of subproblems whose solutions yield a solution to the original problem, built up using a recurrence
  * for segmented least squares, the last point pn belongs to a single segment in the optimal partition, and that segment begins at some earlier point pi
  * if we knew the identity of this last segment pi,...,pn, then we could remove those points from consideration and recursively solve the problem on the remaining points p1,...,pi-1

{{:courses:cs211:winter2018:journals:patelk:lineofbestfit.png?nolink&400|}}

  * if the last segment of the optimal partition is pi,...,pn, then the value of the optimal solution is OPT(n) = e(i,n) + C + OPT(i-1), where e(i,n) denotes the error of the best-fit line through pi,...,pn
  * for the subproblem on the points p1,...,pj:

{{:courses:cs211:winter2018:journals:patelk:lineofbestfitstatement.png?nolink&400|}}

{{:courses:cs211:winter2018:journals:patelk:segmented-least-squares-alg.png?nolink&400|}}

  * we can trace back through the array M to compute an optimum partition:

{{:courses:cs211:winter2018:journals:patelk:find-segments-alg.png?nolink&400|}}
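
Putting the pieces together, here is a minimal sketch of Segmented-Least-Squares plus the trace-back, assuming err[i][j] (the 1-indexed least-squares error for points pi,...,pj) has already been precomputed, e.g. with the helper above:

<code python>
def segmented_least_squares(n, err, C):
    """Return (OPT(n), list of segments (i, j)) for penalty multiplier C."""
    M = [0.0] * (n + 1)   # M[j] = OPT(j); M[0] = 0 for the empty prefix
    cut = [0] * (n + 1)   # cut[j] = the i achieving the minimum for M[j]
    for j in range(1, n + 1):
        # Multi-way choice: try every possible start i of the last segment.
        # min over (value, i) tuples compares value first, so i is the argmin.
        M[j], cut[j] = min((err[i][j] + C + M[i - 1], i)
                           for i in range(1, j + 1))
    # Find-Segments: repeatedly peel off the last segment.
    segments, j = [], n
    while j > 0:
        segments.append((cut[j], j))
        j = cut[j] - 1
    segments.reverse()
    return M[n], segments
</code>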

**Analyzing the Algorithm**
  * total runtime of Segmented-Least-Squares: O(n³)
    * we first compute the values of all the least-squares errors e(i,j)
    * there are O(n²) pairs (i,j)
      * for each pair, we can compute the error e(i,j) in O(n) time
  * the algorithm itself then has n iterations, j = 1,...,n
    * for each value of j, we have to determine the minimum in the recurrence to fill in the array entry M[j]
      * this takes O(n) time per j -> a total of O(n²)
  * so the runtime is O(n²) once all the error values have been determined

==== Personal Thoughts ====

The section did a really good job laying out the algorithms in a way that was easy to follow. I think the least squares problem is also inherently easier to understand because we have worked with similar problems in math classes before. Overall, I think the algorithms and the data structures needed to solve this problem are pretty intuitive.

Readability: 8.0
Interesting: 6.5


----