=======6.1 Weighted Interval Scheduling=======
==Summary==
There is no known greedy solution for this problem, and a straightforward recursive algorithm takes exponential time. To decrease the runtime and eliminate redundant computations, we can use memoization: the technique of saving values that have already been computed. The memoized version of the algorithm, M-Compute-Opt(n), runs in O(n) time. However, we are not done: we also need to retroactively recover the intervals that make up the optimal solution, not just its value. Therefore, we define a second procedure, Find-Solution, that traces back through the saved values; it also takes O(n) time.
  
==Designing a Recursive Algorithm==
  * The fact that the algorithm as written takes exponential time is due to the redundancy in the number of times it issues each of these calls
  * Solution: store the value of Compute-Opt in a globally accessible place the first time we compute it and then simply use this precomputed value in place of all future recursive calls
  * Memoization: the technique of saving values that have already been computed
  * Memoized Compute-Opt makes use of an array M[0…n]; M[j] starts with the value “empty” but holds the value of Compute-Opt(j) as soon as it is first determined (see the sketch after this list)
  
  * Avoid runtime blow-up by not explicitly computing S but rather by recovering the optimal solution from values saved in the array M after the optimum value has been computed
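A minimal Python sketch of the memoized recursion, under assumed conventions (names hypothetical): intervals 1..n are sorted by finish time, v[j] is the value of interval j, and p[j] is the largest index i < j whose interval is compatible with interval j.

     # Memoized Compute-Opt: each M[j] is computed at most once,
     # so the total work across all recursive calls is O(n).
     def m_compute_opt(j, v, p, M):
         if M[j] is None:
             if j == 0:
                 M[j] = 0          # base case: empty problem
             else:
                 # either take interval j plus the best schedule
                 # compatible with it, or skip interval j entirely
                 M[j] = max(v[j] + m_compute_opt(p[j], v, p, M),
                            m_compute_opt(j - 1, v, p, M))
         return M[j]

     # usage: M = [None] * (n + 1); best = m_compute_opt(n, v, p, M)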
  
Retroactively Finding the Optimal Solution
      Find-Solution(j):
       If j=0:
        Output nothing
       Else if vj + M[p(j)] >= M[j-1]:
        Output j together with the result of Find-Solution(p(j))
       Else:
        Output the result of Find-Solution(j-1)
  
==Additional Information==
One of the things that confused me in class but made more sense after I read this section was why we want to compute the optimal solution after finding its value. The main algorithm merely outputs the optimal value and nothing else; defining an extra function that retroactively reconstructs the choices leading to that value would be very helpful when applied in the real world.

On a scale of 1 to 10, the readability of this section was an 8. The authors did a really good job of explaining memoization and why it is necessary for this problem. They also did a great job introducing dynamic programming because they started with a problem that is semi-familiar to us.
=======6.2 Principles of Dynamic Programming =======
==Summary==
We can find the optimal solution to the Weighted Interval Scheduling problem by using an iterative algorithm instead of a recursive one. Both versions run in the same O(n) time, but in general we would use the iterative version because the recursive version is typically more difficult to comprehend. There are three basic principles of dynamic programming: there must be a polynomial number of subproblems; the solution to the original problem must be easily computed from the solutions to the subproblems; and there must be a natural ordering of the subproblems from “smallest” to “largest” with an easy-to-compute recurrence that allows one to determine the solution to a subproblem from the solutions to some number of smaller subproblems.
  
==Designing the Algorithm==
  * Basic principles of dynamic programming: iterating over subproblems rather than computing solutions recursively
  * Can directly compute the entries in M by an iterative algorithm rather than memoized recursion
  
Weighted Intervals Iterative Solution
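A minimal Python sketch of the table-filling version, under the same assumed conventions as the memoized sketch in 6.1 (v[j] values and precomputed p[j], names hypothetical); the single loop makes the O(n) bound immediate:

     # Iterative-Compute-Opt: fill M[0..n] in increasing order of j.
     def iterative_compute_opt(n, v, p):
         M = [0] * (n + 1)             # M[0] = 0: empty problem
         for j in range(1, n + 1):
             # same recurrence as the recursive version
             M[j] = max(v[j] + M[p[j]], M[j - 1])
         return M                      # M[n] holds the optimal value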
    * Polynomial number of subproblems
    * Solution to original problem easily computed from the solutions to subproblems
    * Natural ordering on subproblems from “smallest” to “largest” together with an easy-to-compute recurrence that allows one to determine the solution to a subproblem from the solutions to some number of smaller subproblems
  * Never clear that a collection of subproblems will be useful until one finds a recurrence linking them together; but it can be difficult to think about recurrences in the absence of the “smallest” subproblems that they build on
  
==Additional Information==
One thing from class that made more sense after reading this section was the definition of dynamic programming. I got the gist of it in class from the problems we used it to solve, but I was lacking the definition/principles: there must be a polynomial number of subproblems; the solution to the original problem must be easily computed from the solutions of the subproblems; and there must be a natural ordering of the subproblems from “smallest” to “largest” with an easy-to-compute recurrence that allows one to determine the solution to a subproblem from the solutions to some number of smaller subproblems.

On a scale of 1 to 10, the readability of this section was a 9. This was pretty much an informational section. We merely learned an iterative version of the solution to the problem discussed in the previous section, which uses loops and so is easy to comprehend. The rest of the section explained the principles of dynamic programming, so it was heavily definition based.
=======6.3 Segmented Least Squares=======
==Summary==
The problem can be defined as finding the partition of the points into segments so as to minimize the penalty. In other words, we want to minimize the function E + cL, where E is the total error, c is the cost constant, and L is the number of lines. The value of the optimal solution satisfies the recurrence OPT(n) = min over 1<=i<=n of (e(i,n) + c + OPT(i-1)), where e(i,n) is the error of the best single line through points i through n. We have two parts of the algorithm to define: one to compute the optimal value and another to retroactively recover the optimal solution. The overall runtime of the algorithm is O(n^2).
==The Problem==
  * Suppose our data consists of a set P of n points in the plane denoted (x1, y1) … (xn, yn); suppose x1<x2<…<xn
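The DP needs, for every pair i <= j, the minimum squared error e(i, j) of a single line through points i..j. A minimal Python sketch using the closed-form least-squares fit (names hypothetical; pts is a 0-indexed list of (x, y) pairs):

     # Minimum squared error of the best-fit line through pts[i..j].
     def line_error(pts, i, j):
         seg = pts[i:j + 1]
         m = len(seg)
         sx  = sum(x for x, _ in seg)
         sy  = sum(y for _, y in seg)
         sxx = sum(x * x for x, _ in seg)
         sxy = sum(x * y for x, y in seg)
         denom = m * sxx - sx * sx
         a = (m * sxy - sx * sy) / denom if denom else 0.0  # slope
         b = (sy - a * sx) / m                              # intercept
         return sum((y - (a * x + b)) ** 2 for x, y in seg)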
  * Therefore, the algorithm has an overall runtime of O(n^2)
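A minimal Python sketch of the value computation, assuming the errors e(i, j) have been precomputed (here as a dict err[(i, j)] over 1-indexed points, e.g. via line_error above) and a segment cost constant C; the nested loops over j and i give the O(n^2) bound:

     # Segmented least squares DP: OPT[j] is the minimum penalty
     # for points 1..j; the last segment runs from some i up to j.
     def segmented_least_squares(n, err, C):
         OPT = [0.0] * (n + 1)         # OPT[0] = 0: no points, no cost
         for j in range(1, n + 1):
             OPT[j] = min(err[(i, j)] + C + OPT[i - 1]
                          for i in range(1, j + 1))
         return OPT[n]                 # minimum total penalty E + cL

As in 6.1, a traceback over the stored OPT values recovers the segments themselves.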
  
==Additional Information==
One part of this section that made more sense after going to class was the actual formulation of the problem. This section spent roughly two pages constructing the exact problem we are going to solve, whereas the slides in class made it short and sweet: we want to minimize the equation E + cL, where E is the error, c is the cost constant, and L is the number of lines.

On a scale of 1 to 10, the readability of this section was a 7. Some of the explanations seemed a little circular and long-winded; I preferred the short-and-sweet layout of the problem that the slides in class used. Besides that, though, the actual algorithms were fairly easy to comprehend and analyze for runtime.