courses:cs211:winter2012:journals:mike:home (last modified 2012/03/28 04:00 by whitem12)
  
===== Chapter 5 =====
Divide and Conquer
====Section 1: Mergesort====
Idea: divide the list in half repeatedly until only single-entry (trivially sorted) lists remain, then merge them back together in sorted order so that the final list is a sorted version of the initial list.  Merging all the sublists at one level takes O(n) time, and there are O(log n) levels, so the whole thing takes O(n*log(n)).
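The split-then-merge idea can be sketched in Python (a minimal illustrative sketch, not from the notes):

```python
def mergesort(a):
    """Sort a list: split it in half, sort each half, merge the halves."""
    if len(a) <= 1:
        return a                      # a single entry is already sorted
    mid = len(a) // 2
    return merge(mergesort(a[:mid]), mergesort(a[mid:]))

def merge(left, right):
    """Merge two sorted lists into one sorted list in linear time."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])              # one of these is already empty
    out.extend(right[j:])
    return out
```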
=== Approaches ===
Two basic ways to analyze the recurrence:
==Unrolling the Mergesort Recurrence==
Analyze the first few levels of the recursion, identify the pattern, and generalize to sum the work across all levels.
==Substituting a Solution into the Mergesort Recurrence==
Guess what the solution is and then check it by substituting it into the recurrence (this works well for mergesort, since we know it takes O(n*log(n))).
==Partial Substitution==
If you don't know the constant, you can assume one exists, substitute the guessed form into the recurrence, and check that the argument goes through in the end; working that check back up tells you what the actual constant has to be.
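As a quick sanity check on the substitution idea, one can evaluate the mergesort recurrence T(n) = 2T(n/2) + n with T(1) = 0 and compare it against the guess n*log2(n) (a small illustrative script, not part of the notes):

```python
def T(n):
    # mergesort recurrence: T(1) = 0, T(n) = 2*T(n/2) + n (merge cost n)
    if n <= 1:
        return 0
    return 2 * T(n // 2) + n

# for powers of two n = 2**k, the guess T(n) = n*log2(n) = n*k matches exactly
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k
```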
  
===== Chapter 6 =====
Dynamic Programming - break the problem down into subproblems, then build the subproblem solutions back up to solve the large problem.
==== Weighted interval scheduling ====
Recursive algorithm: this is the first step in creating a dynamic programming algorithm.
=== How to do it ===
  - Label the requests and sort them in order of finish time.
  - Consider the optimal solution: either the last job is in it or it isn't.
  - Find the optimal solution that includes the last job (so ignore all jobs that don't finish before it starts) and the optimal solution that excludes it, and pick whichever has the larger total weight.  Apply this choice recursively; the answer then works its way back up through the call stack.
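These steps can be sketched as a plain recursion (a hypothetical sketch, not from the notes; jobs are assumed to be (start, finish, weight) tuples):

```python
import bisect

def schedule(jobs):
    """Max total weight of compatible jobs, by naive recursion.

    jobs: list of (start, finish, weight) tuples.
    """
    jobs = sorted(jobs, key=lambda j: j[1])          # step 1: sort by finish time
    finishes = [f for _, f, _ in jobs]
    # p[i]: index of the latest job finishing no later than jobs[i] starts (-1 if none)
    p = [bisect.bisect_right(finishes, s) - 1 for s, _, _ in jobs]

    def opt(i):
        if i < 0:
            return 0
        # step 2/3: job i is either in the optimal solution or it isn't
        return max(jobs[i][2] + opt(p[i]),           # take job i, skip incompatible jobs
                   opt(i - 1))                       # leave job i out

    return opt(len(jobs) - 1)
```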
  
This naive recursion is a problem because the number of recursive calls grows at an exponential rate; we want to avoid this, so we introduce the dynamic programming part.
  
By memoizing the result of each previous calculation, we reduce the number of calculations to O(n), so it goes a lot faster.
  
You can use the array of computed values to work your way back and recover the optimal solution itself.  This doesn't add to the time complexity, but it does add to the memory needed.
  
==== Memoization ====
We want to make it clearer how dynamic programming is done right, rather than hodgepodged on top of a recursive algorithm.
M[0] = 0
Then, if v_i is the value of the i'th job and p(i) is the latest job that finishes before job i starts (so it can still be done if job i is chosen), compute, for all i:
M[i] = max(v_i + M[p(i)], M[i-1])
The optimal total value is then M[n] (the last entry, since M never decreases).

This is a simple, straightforward way to think of the previous recursive solution.
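The recurrence M[i] = max(v_i + M[p(i)], M[i-1]) can be filled in iteratively, with a traceback through M to recover which jobs were chosen (a hypothetical sketch; jobs are assumed to be (start, finish, weight) tuples):

```python
import bisect

def schedule_memoized(jobs):
    """Iterative DP for weighted interval scheduling.

    Returns (best total weight, list of chosen jobs).
    M[i] = best weight using the first i jobs (M[0] = 0, as in the notes).
    """
    jobs = sorted(jobs, key=lambda j: j[1])          # sort by finish time
    finishes = [f for _, f, _ in jobs]
    # p[i]: index of the latest job finishing no later than jobs[i] starts (-1 if none)
    p = [bisect.bisect_right(finishes, s) - 1 for s, _, _ in jobs]

    n = len(jobs)
    M = [0] * (n + 1)
    for i in range(1, n + 1):
        v = jobs[i - 1][2]
        M[i] = max(v + M[p[i - 1] + 1],              # M[i] = max(v_i + M[p(i)], ...
                   M[i - 1])                         #             ... M[i-1])

    # trace back through M to recover the chosen jobs
    chosen, i = [], n
    while i > 0:
        if jobs[i - 1][2] + M[p[i - 1] + 1] >= M[i - 1]:
            chosen.append(jobs[i - 1])               # job i was taken
            i = p[i - 1] + 1
        else:
            i -= 1                                   # job i was left out
    return M[n], chosen[::-1]
```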

=== When Dynamic programming should be used ===
  - There are only a polynomial number of subproblems.
  - The solution to the original problem is easy to compute from the solutions to the subproblems.
  - There is a natural ordering of the subproblems.

If working backwards through the problem gets you the solution, dynamic programming is probably the way to go.
==== Least Squares ====
Finding the best-fit line.  We want to minimize the sum of the squared deviations of the data points from a line.  The data is given; find the line.  There isn't always one line that fits well, sometimes multiple lines fit better, but we want creating a new line to carry a penalty; otherwise there would be a separate line between each pair of data points and the SSE (sum of squares error) would be 0.

This can be made into a dynamic programming problem by working backwards: you find the SSE for the last three points (with fewer points it's 0, so that case is easy) and then compare whether making a new partition is more effective in minimizing the SSE + penalty, or whether it's more useful to extend the existing line.

The total running time adds up to O(n^2).

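A minimal sketch of this segmented least squares recurrence (hypothetical code, not from the notes; it recomputes each segment's error from scratch, so it does not achieve the O(n^2) bound, which requires precomputing the segment errors):

```python
def segmented_least_squares(points, penalty):
    """points: list of (x, y) sorted by x; penalty: cost per line segment.

    Returns the minimum total of SSE plus penalty * (number of segments).
    """
    n = len(points)

    def sse(i, j):
        # sum of squared errors of the least-squares line through points[i..j]
        pts = points[i:j + 1]
        m = len(pts)
        sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
        denom = m * sxx - sx * sx
        if denom == 0:                      # a single point: any line through it fits
            a, b = 0.0, sy / m
        else:
            a = (m * sxy - sx * sy) / denom
            b = (sy - a * sx) / m
        return sum((y - a * x - b) ** 2 for x, y in pts)

    # M[j]: best cost for the first j points; last segment starts at some point i
    M = [0.0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = min(sse(i, j - 1) + penalty + M[i] for i in range(j))
    return M[n]
```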
==== Adding a variable ====
In the subset sum problem each item has a weight, and there is a limit W on the total weight you can take on, so you want to maximize the total weight of the items chosen without exceeding W.  There is no greedy algorithm known to find this solution, so we look to dynamic programming.  Because each subproblem now depends on both the number of items considered and the remaining weight limit, we add a variable and build a 2D table (matrix) of optimal solutions, one for every w less than or equal to W.  This way the answer can be computed quickly, though large amounts of memory may be required.

The knapsack problem is one where each element has both a value and a weight.  You want to maximize the sum of the values while keeping the total weight below a certain limit.  This is solved the same way as the subset sum problem, with the same kind of 2D table.
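The 2D table for the knapsack version can be sketched like this (a hypothetical sketch; items are assumed to be (value, weight) pairs):

```python
def knapsack(items, W):
    """items: list of (value, weight) pairs; W: weight limit.

    Returns the maximum total value, using the 2D table
    M[i][w] = best value achievable with the first i items and capacity w.
    """
    n = len(items)
    M = [[0] * (W + 1) for _ in range(n + 1)]    # row 0: no items, value 0
    for i in range(1, n + 1):
        v, wt = items[i - 1]
        for w in range(W + 1):
            M[i][w] = M[i - 1][w]                # option 1: leave item i out
            if wt <= w:
                M[i][w] = max(M[i][w],           # option 2: take item i
                              v + M[i - 1][w - wt])
    return M[n][W]
```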