courses:cs211:winter2018:journals:mccaffreyk:home:6 — 2018/03/26, mccaffreyk
it is more powerful than the other strategies we have tried so far.
  
=== Section 6.1: Weighted Interval Scheduling: A Recursive Procedure ===
  
First, we will apply DP to the weighted interval scheduling problem. This is like the old
combined. Thus, I will rate its ease of reading only 5/10.
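A rough sketch of the recursive, memoized procedure from this section, in Python. The (start, finish, value) tuple format and the binary-search helper used to compute p(j) (the last interval compatible with j) are my own assumptions about the setup, not the book's exact presentation:

```python
from bisect import bisect_right

def weighted_interval_max(intervals):
    """Max total value of a set of non-overlapping intervals.

    intervals: list of (start, finish, value) tuples (assumed format).
    """
    # Sort by finish time so interval n is the last to finish.
    intervals = sorted(intervals, key=lambda iv: iv[1])
    finishes = [iv[1] for iv in intervals]
    n = len(intervals)

    # p[j] = number of intervals finishing no later than interval j starts,
    # i.e. the (1-indexed) last interval compatible with j; 0 if none.
    p = [bisect_right(finishes, intervals[j][0]) for j in range(n)]

    memo = {0: 0}
    def opt(j):
        # Best value using only the first j intervals (1-indexed).
        if j not in memo:
            include = intervals[j - 1][2] + opt(p[j - 1])  # take interval j
            exclude = opt(j - 1)                           # skip interval j
            memo[j] = max(include, exclude)
        return memo[j]

    return opt(n)
```

With memoization each subproblem opt(j) is solved once, so the recursion avoids the exponential blowup of the naive version.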
  
=== Section 6.2: Principles of Dynamic Programming: Memoization or Iteration over Sub-problems ===

Here, we focus on the iterative way of applying this algorithm. We originally used a recursive style, but both approaches achieve the same results for this type of dynamic programming. The iterative algorithm makes n passes, adding one value to the memoized array M on each pass. The value added at step i is the maximum total weight achievable using only the first i intervals, moving from 0 to n, where n is the last interval in finish-time order. We find each value by comparing two options for interval i: exclude it, or include it and add its weight to the optimum over the intervals it does not overlap. I really liked the diagram in this section. It helped me grasp key concepts that I did not yet understand when reading section 6.1. This gets an 8/10.
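The iterative version described above can be sketched as follows; as before, the (start, finish, value) tuple format and the binary-search computation of p are assumptions of mine:

```python
from bisect import bisect_right

def iterative_opt(intervals):
    """Iterative weighted interval scheduling: fill the array M bottom-up."""
    intervals = sorted(intervals, key=lambda iv: iv[1])  # by finish time
    finishes = [iv[1] for iv in intervals]
    n = len(intervals)

    # p[j] = (1-indexed) last interval compatible with interval j; 0 if none.
    p = [bisect_right(finishes, intervals[j][0]) for j in range(n)]

    # M[j] = best total value using only the first j intervals.
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        # Either take interval j (plus the optimum over compatible
        # intervals) or skip it and keep the previous optimum.
        M[j] = max(intervals[j - 1][2] + M[p[j - 1]], M[j - 1])
    return M[n]
```

Each pass does constant work once p is computed, so the loop itself is O(n); this is the same recurrence as the recursive version, just evaluated bottom-up.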

=== Section 6.3: Segmented Least Squares: Multi-way Choices ===
  
Here we focus on the line-of-best-fit problem. For a given line through a set of points on a 2D graph, we define the error as the sum of the squared distances between the line and each point. The interesting thing here is that we allow more than one line. However, additional lines incur a penalty: the number of segments is multiplied by a constant C and added to the sum of each segment's error, and we attempt to minimize this total cost. We can choose the constant C to encourage or discourage additional lines. The exact algorithm is very abstract. We use iteration, somewhat similar to that of section 6.2, to find and memoize the least-squares error sums in O(n^3) time. Next, we determine how many line segments we will need, also with a recursive function; this part takes O(n^2) time. This section was hard for me to an extent similar to 6.1, because it gave complex mathematical formulas without explaining them adequately. Further, the algorithms were very abstract, forcing me to make assumptions and constantly decipher them. Thus, my score for this section is 5/10.
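A sketch of the segmented least squares algorithm described above. The closed-form least-squares formulas inside seg_error and the O(n^2)-entry error table are standard, but the exact function names and the treatment of degenerate segments are my own choices:

```python
def segmented_least_squares(points, C):
    """Minimum total cost: sum of per-segment squared errors plus C per segment.

    points: list of (x, y) pairs; C: the per-segment penalty constant.
    """
    points = sorted(points)  # by x-coordinate
    n = len(points)

    def seg_error(i, j):
        # Squared error of the best single line through points i..j (0-indexed).
        pts = points[i:j + 1]
        m = len(pts)
        sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
        sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
        denom = m * sxx - sx * sx
        if denom == 0:
            return 0.0  # single point (or identical x's): treat as zero error
        a = (m * sxy - sx * sy) / denom   # slope of the least-squares line
        b = (sy - a * sx) / m             # intercept
        return sum((p[1] - a * p[0] - b) ** 2 for p in pts)

    # e[i][j] = error of fitting one line to points i..j (precomputed table).
    e = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            e[i][j] = seg_error(i, j)

    # OPT[j] = minimum cost for the first j points: the last segment starts
    # at some point i, costing e(i, j) + C, plus the optimum before i.
    OPT = [0.0] * (n + 1)
    for j in range(1, n + 1):
        OPT[j] = min(e[i - 1][j - 1] + C + OPT[i - 1] for i in range(1, j + 1))
    return OPT[n]
```

Each OPT[j] considers j candidate start points for its last segment, which is the multi-way choice this section's title refers to; raising C makes extra segments more expensive and so favors fewer lines.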
  
  