courses:cs211:winter2014:journals:onye:home: revisions 2014/03/03 23:30 ([Section 4.5]) and 2014/03/27 03:53 (current) ([Section 5.5]), both by ekentao
  
Multiplying integers. Split each number into a high half and a low half, e.g. x = x1*2^(n/2) + x0 and y = y1*2^(n/2) + y0. Expanding the product gives x*y = x1*y1*2^n + (x1*y0 + x0*y1)*2^(n/2) + x0*y0. Remember the identity (x1 + x0)(y1 + y0) = x1*y1 + x0*y1 + x1*y0 + x0*y0: it lets you recover the middle term as (x1+x0)(y1+y0) - x1*y1 - x0*y0, so only three recursive multiplications (x1*y1, x0*y0, and (x1+x0)(y1+y0)) are needed instead of four (x1*y1, x0*y0, x1*y0, x0*y1). Four multiplications give the recurrence T(n) = 4*T(n/2) + cn, which solves to quadratic time, no better than the brute force solution; in general a recursive procedure whose running time satisfies T(n) = q*T(n/2) + cn with q > 2 runs in O(n^(log_2 q)) time, so three multiplications give O(n^(log_2 3)), roughly O(n^1.59).
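The three-multiplication idea above is Karatsuba's algorithm; here is a minimal Python sketch, splitting by bit length rather than decimal digits (the function name, base case, and bit-splitting are my own choices, not from the text):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers using three recursive
    multiplications instead of four."""
    if x < 10 or y < 10:                # base case: small numbers
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> half, x & ((1 << half) - 1)   # x = x1*2^half + x0
    y1, y0 = y >> half, y & ((1 << half) - 1)   # y = y1*2^half + y0
    a = karatsuba(x1, y1)                       # x1*y1
    b = karatsuba(x0, y0)                       # x0*y0
    c = karatsuba(x1 + x0, y1 + y0)             # (x1+x0)(y1+y0)
    # middle term x1*y0 + x0*y1 recovered as c - a - b
    return (a << (2 * half)) + ((c - a - b) << half) + b
```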

==== Section 6.1 ====

Dynamic programming is similar to divide and conquer in that we solve a larger problem by solving smaller subproblems and combining their solutions. It differs in that the computations done on certain subproblems "overlap": the result of one subproblem can inform the computations of others. Because of this, memoization, in which computed results are stored in an array, can be used to create efficient algorithms.
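A toy illustration of memoization, using the standard Fibonacci recurrence as the example (this example is mine, not the journal's):

```python
def fib(n, memo=None):
    """Memoized Fibonacci: fib(n-1) and fib(n-2) share subproblems,
    so each result is computed once and stored in a table."""
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    memo[n] = n if n < 2 else fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

Without the table this recursion is exponential; with it, each value 0..n is computed once, giving linear time.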

==== Section 6.2 ====
The process of memoization only computes the value of the optimal solution. To obtain the actual solution, one must look at the method used to combine the subproblem solutions. In the interval scheduling problem, the value vj + M[p(j)] is compared to M[j-1], and the larger of the two is stored in M[j]. Thus you can determine the solution by recomputing these values and seeing which one was stored in M[j]: if it is the first, the solution includes job j; if it is the second, it does not.

Once the p function is computed, the algorithm runs in O(n) time, but computing the p function takes O(n log n), so the total running time is O(n log n).
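The recurrence and traceback described above can be sketched as follows (the job representation as (start, finish, value) tuples and the helper names are assumptions):

```python
import bisect

def schedule(jobs):
    """Weighted interval scheduling: M[j] = max(v_j + M[p(j)], M[j-1]),
    then trace back through M to recover the chosen jobs.
    jobs: list of (start, finish, value); returns (best value, chosen jobs)."""
    jobs = sorted(jobs, key=lambda j: j[1])          # sort by finish time
    finishes = [f for _, f, _ in jobs]
    n = len(jobs)
    # p[j]: number of jobs among the first j-1 that finish by job j's start,
    # i.e. the largest compatible index (1-indexed; 0 means none)
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        p[j] = bisect.bisect_right(finishes, jobs[j - 1][0], 0, j - 1)
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = max(jobs[j - 1][2] + M[p[j]], M[j - 1])
    # traceback: job j is in the solution iff v_j + M[p(j)] was the winner
    chosen, j = [], n
    while j > 0:
        if jobs[j - 1][2] + M[p[j]] >= M[j - 1]:
            chosen.append(jobs[j - 1])
            j = p[j]
        else:
            j -= 1
    return M[n], chosen[::-1]
```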

==== Section 6.3 ====
Segmented least squares is a problem that can be solved efficiently using dynamic programming. It extends the problem of finding the line of best fit through a set of points: here one may use several line segments, but there is a cost function cL + E, where L is the number of segments used and E is the total squared error of all the segments.

A direct implementation of the dynamic programming scheme results in an O(n^3) running time, but the computation of all the squared errors can be made more efficient, bringing the total running time down to O(n^2).
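A sketch of the DP, assuming points are sorted by x-coordinate. The error table is filled with running sums and the standard least-squares identity SSE = Σy² - a·Σxy - b·Σy, so the whole table costs O(n^2) rather than O(n^3) (the function and variable names are my own):

```python
def segmented_least_squares(points, C):
    """Segmented least squares: OPT[j] = min over i of err(i, j-1) + C + OPT[i].
    points: list of (x, y) sorted by x; C: cost per segment. Returns min cost."""
    n = len(points)
    xs = [float(p[0]) for p in points]
    ys = [float(p[1]) for p in points]
    # err[i][j]: squared error of the best-fit line through points i..j
    err = [[0.0] * n for _ in range(n)]
    for i in range(n):
        sx = sy = sxx = sxy = syy = 0.0          # running sums for O(1) updates
        for j in range(i, n):
            m = j - i + 1
            sx += xs[j]; sy += ys[j]
            sxx += xs[j] ** 2; sxy += xs[j] * ys[j]; syy += ys[j] ** 2
            denom = m * sxx - sx * sx
            a = (m * sxy - sx * sy) / denom if denom else 0.0
            b = (sy - a * sx) / m
            err[i][j] = max(syy - a * sxy - b * sy, 0.0)  # clamp float noise
    # OPT[j]: minimum cost of segmenting the first j points
    OPT = [0.0] * (n + 1)
    for j in range(1, n + 1):
        OPT[j] = min(err[i][j - 1] + C + OPT[i] for i in range(j))
    return OPT[n]
```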

==== Section 6.4 ====

This version of the knapsack problem is as follows: you are given a list of n items, each with a weight and a value, and a specified weight limit W; you must find a subset that maximizes total value while staying within the weight limit. This case differs in that the subproblems have two variables instead of one, resulting in a 2-D array. Still, dynamic programming applies, and the resulting algorithm runs in O(nW) time.

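The 2-D table above can be sketched directly; OPT[i][w] is the best value achievable using the first i items under weight limit w (item representation and names are my own choices):

```python
def knapsack(items, W):
    """0/1 knapsack DP. items: list of (weight, value); W: capacity.
    Fills an (n+1) x (W+1) table in O(nW) time."""
    n = len(items)
    OPT = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, vi = items[i - 1]
        for w in range(W + 1):
            OPT[i][w] = OPT[i - 1][w]            # skip item i
            if wi <= w:                          # or take it, if it fits
                OPT[i][w] = max(OPT[i][w], vi + OPT[i - 1][w - wi])
    return OPT[n][W]
```

Note the O(nW) bound is pseudo-polynomial: it depends on the magnitude of W, not just the input length.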
courses/cs211/winter2014/journals/onye/home.1393889441.txt.gz · Last modified: 2014/03/03 23:30 by ekentao