courses:cs211:winter2018:journals:holmesr:section_6.0, last modified 2018/03/27 by holmesr
====== Chapter 6: Dynamic Programming ======
===== Chapter 6 Front Matter =====

The basic idea behind dynamic programming is similar to that of divide and conquer and opposite to that of greedy: dynamic programming divides the problem into subproblems and then combines their solutions to build the solution to the larger problem. This might suggest that a dynamic programming approach could take as long as brute-force search, but in fact we never need to examine every solution to our problem explicitly.

| + | ===== 6.1 a Recursive Approach to Weighted Interval Scheduling ===== | ||
| + | |||
First, some notation to assist the discussion. We will call p(j) the largest index i < j such that intervals i and j are disjoint. This is to say that i is the rightmost interval that ends before j begins. We can also consider an optimal schedule, the precise contents of which are unknown, that we will call O. We know that no interval between p(n) and n can be included in O if n is included in O, because those intervals overlap n. Additionally, if n is not included in O, then O is simply the optimal solution for the first n-1 intervals.

Using that principle, we can develop the statement that an interval j should only be added to the solution if the sum of its value and the optimal solution for the first p(j) intervals is greater than the optimal solution for the first j-1 intervals. This statement obviously lends itself to recursion, since one must calculate the optimal value for p(j) and for j-1. Another thing that is easy to see is that a naive implementation has exponential running time, because the recursion recomputes the same subproblems over and over.

The redundant recursive calling can be solved by memoization: the first time a subproblem is solved, its result is stored in an array M, and every later call simply looks the value up. With memoization, each of the n subproblems is computed only once, so the algorithm runs in O(n) time after sorting.

| + | ===== 6.2 Principles of Dynamic Programming: | ||
| + | |||
The iterative version of the algorithm works by directly computing the entries in M rather than relying on the memoized recursion present in the recursive version. This algorithm works by iterating through the n items, computing each one's value, and storing it directly in the array. It is easy to see that the algorithm has linear running time, since it executes a constant-time operation on each of its n passes through the loop.

From here we are able to develop an outline of dynamic programming. There are three properties that should be true of a problem in order to guide our development of a dynamic programming approach to solving it. First, there must be only a polynomial number of subproblems. Additionally, the solution to the original problem must be easy to compute from the solutions to the subproblems, and there must be a natural ordering of the subproblems from smallest to largest with an easy-to-compute recurrence relating them.

Sometimes, it can be a challenge to determine whether it is more useful to first reason about the structure of the optimal solution or to first come up with subproblems that seem natural and then figure out a recurrence that links them. In this way, dynamic programming can be reminiscent of a chicken-and-egg reasoning puzzle.

| + | ===== 6.3 Segmented Least Squares: Multi-way Choices ===== | ||
| + | |||
What we will call the error of a line of best fit is the sum of each point's squared vertical distance from the line. A single line, however, may fit a set of points very poorly, for example when the points appear to lie along two or more distinct lines.

This dilemma leads us to explore the issue of change detection, which helps us identify the few points at which a change occurs. Such a technique will help tell us how many linear approximations to use. To assist in solving this problem, we define the penalty of a partition as the number of segments times a given multiplier C, plus the sum over segments of the error of the optimal line through that segment's points. We want to minimize the penalty. As we increase the number of segments into which the data is partitioned, the total error decreases, but the segment term of the penalty grows; the multiplier C controls this trade-off.

To design the algorithm, we can begin by observing that the final point in the set of points belongs to exactly one segment, and that segment must begin at some point. If we can figure out what these two points are, then we will have found the composition of the final segment and we can remove those points from the data. Letting e<sub>i,j</sub> denote the minimum error of any line through points p<sub>i</sub> through p<sub>j</sub>, and OPT(i) the optimal penalty for the first i points, we get the recurrence OPT(j) = min over 1 <= i <= j of (e<sub>i,j</sub> + C + OPT(i-1)).

To be completely honest, I am not completely sure what statement 6.7 in the book is saying, so I will look back over it at a later time.

The algorithm uses O(n<sup>2</sup>) time once all of the e<sub>i,j</sub> values have been computed; computing those values naively takes O(n<sup>3</sup>) time in total.