Divide and Conquer
====Section 1: Mergesort====
Idea: divide the list in half repeatedly until only trivially sorted lists (single entries) remain, then merge them back together into sorted lists so that the final list is a sorted version of the initial list. There are about log(n) levels of splitting, and merging costs O(n) work per level, for O(n*log(n)) total.
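The split-then-merge idea can be sketched as follows (a minimal illustration; the function names are mine, not from the notes):

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])   # at most one of these
    out.extend(right[j:])  # still has elements left
    return out

def mergesort(a):
    """Split in half, sort each half recursively, merge the results."""
    if len(a) <= 1:        # a single entry is already sorted
        return a
    mid = len(a) // 2
    return merge(mergesort(a[:mid]), mergesort(a[mid:]))
```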
=== Approaches ===
There are two basic ways to analyze the mergesort recurrence:
==Unrolling the Mergesort Recurrence==
Expand the first few levels of the recursion, spot the pattern of work done per level, and generalize to sum the work over all levels.
==Substituting a Solution into the Mergesort Recurrence==
Guess the solution and then verify it by substituting it back into the recurrence (this works well for mergesort, since we expect the answer to be O(n*log(n))).
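As a worked check (my own example, not from the notes): guessing T(n) ≤ c·n·log₂(n) for the mergesort recurrence T(n) ≤ 2T(n/2) + cn and substituting gives

```latex
T(n) \le 2\,T(n/2) + cn
     \le 2\,c\,\tfrac{n}{2}\log_2\!\tfrac{n}{2} + cn
     = cn(\log_2 n - 1) + cn
     = cn\log_2 n,
```

so the guess is consistent with the recurrence.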
==Partial Substitution==
If you don't know the constant, assume one exists, substitute the partial form into the recurrence, check that it works out in the end, and then work back up to find the actual constant involved.
===== Chapter 6 =====
Dynamic Programming - break a problem down into subproblems, then build the subproblem solutions back up to solve the large problem.
==== Weighted interval scheduling ====
Writing a recursive algorithm is the first step in creating a dynamic-programming algorithm.
=== How to do it ===
- Label the requests and sort them in order of finish time.
- Think about the optimal solution: the last item is either in it or not in it.
- Find the optimal solution that includes the last job (ignoring all jobs that don't finish before it starts) and the optimal solution that doesn't include it, then take the better of the two.
This is a problem because the number of recursive calls grows at a very fast (exponential) rate. We want to avoid this, so we introduce the dynamic part.

By memoizing the result of each previous calculation, every subproblem is computed only once, which keeps the running time down.

You can then use the array of calculation choices to work your way back and recover the optimal solution itself.
==== Memoization ====
We want to make it clearer how dynamic programming is done properly, rather than hodgepodged on top of a recursive algorithm. Build up an array M iteratively, starting from

M[0] = 0

Then, if v_i is the value of the i'th job and p(i) is the latest job that can still be done if job i is chosen (i.e., the latest job that finishes before job i starts), compute for each i

M[i] = max(v_i + M[p(i)], M[i-1])

The optimal value is then M[n] (M is nondecreasing, so this is the same as the max of M).
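The iterative recurrence, plus a traceback to recover which jobs were chosen (a sketch; the names and the 1-based array convention are mine):

```python
def iterative_schedule(values, p):
    """values[1..n] = v_i (values[0] unused); p[1..n] = p(i) as in the notes.
    Returns (optimal value, list of chosen job indices)."""
    n = len(values) - 1
    M = [0] * (n + 1)          # M[0] = 0
    for i in range(1, n + 1):
        # include job i (value plus best compatible prefix) or skip it
        M[i] = max(values[i] + M[p[i]], M[i - 1])

    # work backwards through M to recover the optimal solution
    chosen, i = [], n
    while i > 0:
        if values[i] + M[p[i]] >= M[i - 1]:
            chosen.append(i)   # job i was part of the optimum
            i = p[i]
        else:
            i -= 1             # job i was skipped
    return M[n], chosen[::-1]
```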
+ | |||
+ | This is a simple straight forward way to think of the previous solution. | ||
+ | |||
=== When dynamic programming should be used ===
- There are only a polynomial number of subproblems.
- The solution to the original problem is easy to compute from the solutions to the subproblems.
- There is a natural ordering to the subproblems.
+ | |||
+ | If working backwards gets you the solution its probably going to be the way to go. | ||
+ | |||
+ | ==== Least Squares ==== | ||