===== Chapter 6 =====
Dynamic Programming - break the problem down into subproblems, then build the subproblems back up to solve the large problem.
==== Weighted Interval Scheduling ====
Recursive algorithm: this is the first step in creating a dynamic programming algorithm.
=== How to do it ===
- Label the requests and sort them in order of finish time.
- Think of the optimal solution: either the last item is in it or it isn't.
- Find the optimal solution that includes the last one (so ignore all requests that don't finish before it starts) and the optimal solution that doesn't include it, and take whichever is better.
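The recursive step above can be sketched as follows (a minimal sketch, assuming jobs are given as (start, finish, value) tuples sorted by finish time; the function names are mine, not from the notes):

```python
# Recursive weighted interval scheduling (no memoization yet).
# jobs is 1-indexed with a dummy at index 0; jobs[i] = (start, finish, value),
# sorted by finish time.

def p(jobs, i):
    """Latest job that finishes before job i starts (0 if none)."""
    for j in range(i - 1, 0, -1):
        if jobs[j][1] <= jobs[i][0]:
            return j
    return 0

def opt(jobs, i):
    """Max total value achievable using only jobs 1..i."""
    if i == 0:
        return 0
    # Either job i is in the optimal solution (plus the best solution among
    # jobs compatible with it) or it isn't.
    return max(jobs[i][2] + opt(jobs, p(jobs, i)), opt(jobs, i - 1))

# Made-up example data: dummy at index 0, then jobs sorted by finish time.
jobs = [None, (0, 3, 2), (1, 5, 4), (4, 6, 4), (5, 8, 7)]
print(opt(jobs, 4))  # jobs 2 and 4 are compatible: 4 + 7 = 11
```

Note that `opt` recomputes the same subproblems many times, which is exactly the exponential blow-up discussed next.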
+ | |||
+ | This is a problem because the call stack grows at very very fast rate, we want to avoid this, so we introduce the dynamic part. | ||
+ | |||
+ | By memorizing the result of a previous calculation, | ||
+ | |||
+ | You can use the Array of calculation choices to work your way back to find the optimal solution. | ||
+ | |||
==== Memoization ====
We want to make it clearer how dynamic programming is done right, rather than hodgepodged on top of a recursive algorithm. Start with

M[0] = 0

Then, if v_i is the value of the i'th job and p(i) is the latest job that finishes before job i starts (the latest job that can still be done if job i is chosen), for all i compute

M[i] = max(v_i + M[p(i)], M[i-1])

The optimal amount will then be the max of M (which is M[n], since M never decreases).

This is a simple, straightforward way to compute the previous recursive solution from the bottom up.

=== When dynamic programming should be used ===
- There are only a polynomial number of subproblems.
- The solution to the original problem is easy to get from the solutions to the subproblems.
- There is a natural ordering on the subproblems.

If working backwards gets you the solution, it's probably going to be the way to go.

==== Least Squares ====
Finding the best-fit line: we want to minimize the sum of the squared deviations of the data from the line. The data is given; find the line. One line isn't always enough - sometimes multiple line segments fit better - but we want creating a new line to carry a penalty, because otherwise there would be a line between each pair of adjacent data points and the SSE (sum of squared errors) would be 0.

This can be made into a dynamic programming problem by working backwards: the base cases are easy (with fewer than three points the SSE is 0), and for each further point you compare whether making a new partition is more effective at minimizing SSE + penalty, or whether it's more useful to just extend the existing line.

With the per-segment errors precomputed, the table has n entries and each takes O(n) work to fill, so this total adds up to O(n^2).
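A sketch of the whole thing (the data and the penalty C are made-up assumptions; `sse` fits a least-squares line to points i..j and returns its error):

```python
def sse(xs, ys, i, j):
    """Sum of squared errors of the best-fit line through points i..j (0-indexed)."""
    n = j - i + 1
    sx, sy = sum(xs[i:j+1]), sum(ys[i:j+1])
    sxx = sum(x * x for x in xs[i:j+1])
    sxy = sum(x * y for x, y in zip(xs[i:j+1], ys[i:j+1]))
    denom = n * sxx - sx * sx
    if denom == 0:          # a single point: any line through it has no error
        return 0.0
    a = (n * sxy - sx * sy) / denom   # slope
    b = (sy - a * sx) / n             # intercept
    return sum((y - (a * x + b)) ** 2
               for x, y in zip(xs[i:j+1], ys[i:j+1]))

def segmented_ls(xs, ys, C):
    """OPT[j] = min over i of: sse(points i..j) + C + OPT[i-1]."""
    n = len(xs)
    OPT = [0.0] * (n + 1)   # OPT[j] = best cost covering the first j points
    for j in range(1, n + 1):
        OPT[j] = min(sse(xs, ys, i - 1, j - 1) + C + OPT[i - 1]
                     for i in range(1, j + 1))
    return OPT[n]

# Two clear line segments: y = x rising, then y = 10 - x falling.
xs = list(range(10))
ys = [x if x < 5 else 10 - x for x in xs]
print(segmented_ls(xs, ys, C=1.0))  # two perfect segments: cost 2 * C = 2.0
```

As written, recomputing each segment's SSE inside the loop makes this O(n^3); precomputing all the segment errors (e.g. from prefix sums) brings the table-filling itself down to the O(n^2) stated above.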