Section 5.3 introduces a new problem: rankings and collaborative filtering. If you were to ask a group of people to rank 5 movies from best to worst, how can you tell which two people's rankings are similar and which differ? The book tackles this problem with a divide-and-conquer approach. To determine how different two people's ranking lists are, you can count the number of inversions it takes to make Sally's list look like Jane's. The divide-and-conquer approach takes both Sally's and Jane's lists, recursively divides them in half, and sorts the two halves into a larger whole while counting the inversions. This is the Merge-and-Count part of the problem, an algorithm that takes linear time. The algorithm is written out on page 224, but it basically keeps a pointer to the current position in Sally's half and in Jane's half and compares the two values. If Jane's element is smaller, append it to the sorted list and compare her next element to Sally's. If Sally's is smaller, append Sally's element to the sorted list and increase the Count variable, which tracks the number of inversions, by the number of elements remaining in Jane's list. Keep doing this while both lists are non-empty. When one list becomes empty, append the rest of the other list to the sorted list and return Count and the sorted list. Merge-and-Count is executed inside a larger recursive function, Sort-and-Count(L), that simply handles dividing the lists into halves and returns the total count and the sorted list. Overall this takes O(n log n) time.
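Here is a quick Python sketch of Merge-and-Count and Sort-and-Count as described above, where `a` plays the role of Jane's half and `b` Sally's (the names and structure are my own, not the book's pseudocode from page 224):

<code python>
def merge_and_count(a, b):
    """Merge two sorted lists while counting inversions."""
    merged, count = [], 0
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i])          # Jane's element is smaller: no inversions
            i += 1
        else:
            merged.append(b[j])          # Sally's element is smaller:
            count += len(a) - i          # it is inverted with everything left in a
            j += 1
    merged.extend(a[i:])                 # one list is empty: append the rest
    merged.extend(b[j:])
    return count, merged

def sort_and_count(lst):
    """Recursively split, sort, and count inversions: O(n log n) overall."""
    if len(lst) <= 1:
        return 0, lst
    mid = len(lst) // 2
    left_count, left = sort_and_count(lst[:mid])
    right_count, right = sort_and_count(lst[mid:])
    merge_count, merged = merge_and_count(left, right)
    return left_count + right_count + merge_count, merged

print(sort_and_count([2, 4, 1, 3, 5]))   # (3, [1, 2, 3, 4, 5])
</code>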
Readability: 10

====== 5.4 Finding the Closest Pair of Points ======
Given n points in the plane, find the pair that is closest together. One's first reaction is to compare all pairs of points, which is a quadratic solution. However, through divide and conquer you can find the closest pair of points in O(n log n) time. First consider a one-dimensional version: sort the points on the line, then walk through the sorted list and compute the distance between each point and the one that comes after it, O(n log n) in total. To approach the 2-D problem you need to account for both x and y. So, first you take the left half of the problem and find its closest pair, then take the right half and do the same. This leads to an O(n log n) solution, but it misses pairs that cross between the left and right sides. To handle these, take the smallest distance delta found in the left half and the right half, and only consider points within that distance of the center partition on either side. Page 228 proves why you only need to consider a distance of delta on both sides of L. In English: since we know the smallest distance within the left half and within the right half is delta, we don't need to consider any point that is further than delta from the center line, because no crossing pair involving it can beat the minimum distance we already have. From there the book illustrates partitioning the strip into delta/2 x delta/2 boxes. Each box contains at most one point (if a box contained two points, they would be closer together than delta, contradicting delta being the minimum on either side), so we only need to compare the selected point against 15 nearby boxes. However, we showed in class that you only need to consider 7. To implement the algorithm we order the points in the strip by their y-coordinate, and we can do so because we have insight that we didn't have during the recursion: delta. If there is a pair whose distance is less than delta, that pair's distance will be returned; if not, delta will be returned. The complete algorithm is on page 230. To prove its correctness the book uses proof by induction, though the proof seemed incomplete to me as I was reading it. Overall, the running time of the algorithm is O(n log n), not O(n^2)!
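Here is a Python sketch of the divide and conquer described above; it assumes the points are distinct, and the inner loop checks only the next 7 strip points in y-order, matching what we showed in class:

<code python>
import math

def dist(p, q):
    """Euclidean distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def closest_pair(points):
    """Closest-pair distance among n >= 2 distinct points, O(n log n)."""
    px = sorted(points)                        # sorted by x
    py = sorted(points, key=lambda p: p[1])    # sorted by y
    return _closest(px, py)

def _closest(px, py):
    n = len(px)
    if n <= 3:                                 # brute-force the base case
        return min(dist(p, q) for i, p in enumerate(px) for q in px[i + 1:])
    mid = n // 2
    mid_x = px[mid][0]
    left = set(px[:mid])                       # assumes distinct points
    ly = [p for p in py if p in left]          # y-sorted left half
    ry = [p for p in py if p not in left]      # y-sorted right half
    delta = min(_closest(px[:mid], ly), _closest(px[mid:], ry))
    # Only points within delta of the dividing line can form a crossing pair.
    strip = [p for p in py if abs(p[0] - mid_x) < delta]
    best = delta
    for i, p in enumerate(strip):
        for q in strip[i + 1:i + 8]:           # at most 7 candidates, as shown in class
            if q[1] - p[1] >= best:
                break
            best = min(best, dist(p, q))
    return best

print(closest_pair([(0, 0), (3, 4), (1, 1), (7, 7)]))  # 1.414...
</code>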
BOOM.
Readability: 10

====== Chapter 6: Dynamic Programming ======
{{:courses:cs211:winter2014:journals:kinsey:screen_shot_2014-03-25_at_6.38.21_pm.png|}}
Readability: 10
====== 6.1 Weighted Interval Scheduling: A Recursive Procedure ======
{{:courses:cs211:winter2014:journals:kinsey:screen_shot_2014-03-25_at_6.38.27_pm.png|}}
{{:courses:cs211:winter2014:journals:kinsey:screen_shot_2014-03-25_at_6.38.36_pm.png|}}
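My notes for this section are in the screenshots above; as a supplement, here is a small Python sketch of the standard recurrence this section covers, OPT(j) = max(v_j + OPT(p(j)), OPT(j-1)), with memoization (the names and setup are my own, not copied from the book):

<code python>
from bisect import bisect_right

def weighted_interval_scheduling(jobs):
    """jobs: list of (start, finish, value). Returns the optimal total value."""
    jobs = sorted(jobs, key=lambda j: j[1])            # sort by finish time
    finishes = [j[1] for j in jobs]
    # p[j]: how many jobs finish by the time job j starts (rightmost compatible job)
    p = [bisect_right(finishes, jobs[j][0]) for j in range(len(jobs))]
    memo = {0: 0}                                      # OPT(0) = 0: no jobs, no value

    def opt(j):                                        # best value over the first j jobs
        if j not in memo:
            start, finish, value = jobs[j - 1]
            memo[j] = max(value + opt(p[j - 1]),       # take job j...
                          opt(j - 1))                  # ...or skip it
        return memo[j]

    return opt(len(jobs))

print(weighted_interval_scheduling([(0, 3, 5), (2, 5, 6), (4, 7, 4)]))  # 9
</code>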
Readability: 9
====== 6.2 Principles of Dynamic Programming: Memoization or Iteration over Subproblems ======
{{:courses:cs211:winter2014:journals:kinsey:screen_shot_2014-03-25_at_6.38.42_pm.png|}}
Readability: 9
====== 6.3 Segmented Least Squares: Multiway Choices ======
{{:courses:cs211:winter2014:journals:kinsey:screen_shot_2014-03-25_at_6.38.48_pm.png|}}
{{:courses:cs211:winter2014:journals:kinsey:screen_shot_2014-03-25_at_6.38.54_pm.png|}}

Readability: 8
====== 6.4 Subset Sums and Knapsacks: Adding a Variable ======
{{:courses:cs211:winter2014:journals:kinsey:screen_shot_2014-03-25_at_6.39.01_pm.png|}}
{{:courses:cs211:winter2014:journals:kinsey:screen_shot_2014-03-25_at_6.39.08_pm.png|}}
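Again the notes are in the screenshots; below is a short sketch of the two-variable recurrence OPT(i, w) for the 0/1 knapsack that this section introduces (my own rendering of the usual formulation, not the book's pseudocode):

<code python>
def knapsack(items, capacity):
    """items: list of (weight, value). Classic O(n * W) dynamic program:
    OPT(i, w) = max(OPT(i-1, w), v_i + OPT(i-1, w - w_i)) when w_i <= w."""
    n = len(items)
    # opt[i][w] = best value using the first i items with capacity w
    opt = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        weight, value = items[i - 1]
        for w in range(capacity + 1):
            opt[i][w] = opt[i - 1][w]                  # skip item i
            if weight <= w:                            # or take it, if it fits
                opt[i][w] = max(opt[i][w], value + opt[i - 1][w - weight])
    return opt[n][capacity]

# Subset sum is the special case where each item's value equals its weight.
print(knapsack([(2, 3), (3, 4), (4, 5)], 5))  # 7: take the items of weight 2 and 3
</code>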
Readability: 7

====== 7.1 Maximum Flow Problem and the Ford-Fulkerson Algorithm ======
{{:courses:cs211:winter2014:journals:kinsey:screen_shot_2014-04-01_at_9.38.12_am.png |}}
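To go with the screenshot, here is a compact sketch of Ford-Fulkerson that finds augmenting paths by depth-first search; the residual-graph representation (nested dicts) is my own choice, not the book's:

<code python>
def ford_fulkerson(capacity, source, sink):
    """capacity: dict {u: {v: cap}}. Repeatedly find an augmenting s-t path
    in the residual graph and push its bottleneck flow; returns the max flow."""
    # Residual graph: forward edges start at full capacity, reverse edges at 0.
    residual = {u: dict(adj) for u, adj in capacity.items()}
    for u, adj in capacity.items():
        for v in adj:
            residual.setdefault(v, {}).setdefault(u, 0)

    def find_path(u, t, visited):              # DFS for a path with residual capacity
        if u == t:
            return [u]
        visited.add(u)
        for v, cap in residual[u].items():
            if cap > 0 and v not in visited:
                path = find_path(v, t, visited)
                if path:
                    return [u] + path
        return None

    max_flow = 0
    while True:
        path = find_path(source, sink, set())
        if path is None:                        # no augmenting path: flow is maximum
            return max_flow
        bottleneck = min(residual[u][v] for u, v in zip(path, path[1:]))
        for u, v in zip(path, path[1:]):        # push flow, update residual capacities
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        max_flow += bottleneck

graph = {'s': {'a': 2, 'b': 1}, 'a': {'t': 2}, 'b': {'t': 1}, 't': {}}
print(ford_fulkerson(graph, 's', 't'))  # 3
</code>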
Readability: 7.2
====== 7.2 Maximum Flows and Minimum Cuts in a Network ======
{{:courses:cs211:winter2014:journals:kinsey:screen_shot_2014-04-01_at_10.20.54_am.png|}}
Readability: 7

====== 7.5 ======
{{:courses:cs211:winter2014:journals:kinsey:screenshot_from_2014-04-01_21_43_21.png?200|}}
Readability: 3

====== 7.7 ======
{{:courses:cs211:winter2014:journals:kinsey:s1.jpg|}}
Readability: 2
  
  
  