====== Chapter 6 ======

===== 6.1: Weighted Interval Scheduling: A Recursive Procedure =====
We have previously discussed Interval Scheduling with all weights equal to 1, but this section delves into a more general case of the problem where each interval can carry its own value. Unfortunately, the greedy algorithm that was optimal in the unweighted case is no longer optimal here, so we need a new algorithm. In fact, we must move away from greedy algorithms altogether; as one may have guessed, we now turn to dynamic programming. The solution we ultimately use involves recursion and memoization. Memoization lets us save each subproblem's value after calculating it once so that it can be looked up easily in the future without being recalculated over and over again. This is especially useful with all of the recursive calls, since it lowers our runtime complexity: without memoization the algorithm would run in exponential time, but with it the runtime drops all the way to O(n) (once the intervals are sorted by finish time), which is obviously a tremendous improvement. This goes to show the power of memoization in the right situation.
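
To make the recurrence concrete, here is a minimal Python sketch of the memoized approach. The job representation and the helper names (schedule_value, p, opt) are my own placeholders, not from the book.
<code python>
import bisect

def schedule_value(jobs):
    """Max total value of compatible jobs, each job a (start, finish, value) tuple."""
    # Sort jobs by finish time, as the recurrence requires.
    jobs = sorted(jobs, key=lambda j: j[1])
    finishes = [f for _, f, _ in jobs]

    # p[j] = largest index i < j whose job is compatible with job j
    # (finishes at or before job j starts); -1 if no such job exists.
    p = [bisect.bisect_right(finishes, s) - 1 for s, _, _ in jobs]

    memo = {}

    def opt(j):
        # Best value using only the first j+1 jobs (in finish-time order).
        if j < 0:
            return 0
        if j not in memo:
            # Either skip job j, or take it plus the best compatible prefix.
            memo[j] = max(opt(j - 1), jobs[j][2] + opt(p[j]))
        return memo[j]

    return opt(len(jobs) - 1)
</code>
For example, schedule_value([(0, 3, 2), (1, 5, 4), (4, 6, 4)]) returns 6 by taking the first and third jobs; each opt(j) is computed only once, which is where the exponential-to-linear speedup comes from.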

This section was relatively interesting, as dynamic programming is something I saw in Theory of Computation that I really did not understand at the time, probably due to the difficulty of the problem it was used on. Everything in this section, however, makes perfect sense. Overall, I give it a 6/10.

===== 6.2: Principles of Dynamic Programming: Memoization or Iteration over Subproblems =====
This section builds off of the specific example of the previous section to reach a more general understanding of the dynamic programming process. One important distinction it makes is that we can iterate over subproblems instead of computing solutions recursively. The section then restates the previous section's algorithm in terms of the more typical ideas of dynamic programming to serve as a building block; the main difference between the two versions of the algorithm is the previously mentioned iteration over subproblems. The section goes on to note that every time we use this method, there is an equivalent solution based on memoization. The properties we need satisfied to use dynamic programming are:
  - There are only a polynomial number of subproblems.
  - The solution to the original problem can be easily computed from the solutions to the subproblems.
  - There is a natural ordering of subproblems from smallest to largest with an easy-to-compute recurrence, which allows one to determine the solution to a subproblem from the solutions of other, smaller subproblems.
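
As a rough illustration of iterating over subproblems, the same value can be computed by filling an array in order of increasing subproblem size. This sketch assumes the finish-time-sorted values and the p array from the previous snippet; the names are again my own.
<code python>
def iterative_opt(values, p):
    """Iterative version: values[j] is job j's value, p[j] its last compatible
    predecessor (0-indexed, -1 if none), jobs sorted by finish time."""
    n = len(values)
    M = [0] * (n + 1)  # M[j] = optimal value over the first j jobs
    for j in range(1, n + 1):
        # Each entry depends only on smaller, already-filled entries.
        M[j] = max(M[j - 1], values[j - 1] + M[p[j - 1] + 1])
    return M[n]
</code>
Tracing back through M (asking at each j whether job j was taken) also recovers the actual schedule, not just its value.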

This section does a better job of explaining exactly what dynamic programming is, as opposed to just giving a single example of how to solve one problem in a specific way. It gets a 6.5/10.

===== 6.3: Segmented Least Squares: Multi-way Choices =====
This section begins by drawing an important distinction between the upcoming problems and the previous ones. The previous problems dealt with binary choices: either an item belonged in the final solution or it did not. This section deals with problems where, at each step, we may have a polynomial number of choices, a multi-way choice. It suggests, however, that going from binary to multi-way is a smooth transition. This brings us to the problem the section is about, the Segmented Least Squares Problem, which can be seen as finding the appropriate number of best-fit line segments needed to approximate the points on a graph while minimizing error. This kind of problem is known as change detection: given a sequence, we want to find the points where a discrete change takes place, which in this context means going from one linear approximation to another. To solve it, we associate a penalty with each additional line in the approximation, so we seek the partition with the best tradeoff: not so many segments that the penalties pile up, but enough to greatly reduce the total error between the points and their line(s) of best fit. While we love seeing linear algorithms, due to the complexity of the problem we end up with a quadratic runtime, which, while higher than we are used to, is still practical in the grand scheme of things.
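
As a rough sketch of the recurrence OPT(j) = min over i of ( e(i, j) + C + OPT(i - 1) ), assuming the segment errors have already been precomputed (that preprocessing is where much of the quadratic work hides); err and C are placeholder names of mine, not the book's.
<code python>
def segmented_least_squares(err, C, n):
    """Minimum total cost of segmenting points 1..n, where err[i][j] is the
    least-squares error of fitting one line to points i..j (0-indexed)
    and C is the penalty charged per segment."""
    M = [0.0] * (n + 1)  # M[j] = min cost of segmenting the first j points
    for j in range(1, n + 1):
        # The last segment ends at point j; try every possible start i.
        M[j] = min(err[i - 1][j - 1] + C + M[i - 1] for i in range(1, j + 1))
    return M[n]
</code>
Recording which i achieved each minimum and tracing back from M[n] recovers the segments themselves.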

This section kind of dragged on for me. I typically like the specific problem sections less. I give this a 5/10.