courses:cs211:winter2014:journals:kinsey:home, revisions 2014/03/04 20:46 to 2014/04/03 15:32 (current), by schellk
  * The analysis can also start out more generally: rather than assuming the log base is 2, use a constant base b, and instead of the specific constant c use some constant k. The bound is still O(n log n), since logarithms with different constant bases differ only by a constant factor.

Readability:

====== 5.2 Further Recurrence Relations ======
Previously in Chapter 5 we had only dealt with recurrences in which the problem is divided in half into two subproblems. This section covers the cases where a problem is split in half into just one subproblem, or into more than two subproblems. Splitting into a single subproblem, T(n) <= T(n/2) + cn, gives only O(n) total work, since the work per level shrinks geometrically. Splitting into q > 2 subproblems of half the size, T(n) <= qT(n/2) + cn, gives O(n^(log2 q)) operations. The book derives this bound in two ways. Through unrolling, you analyze the first few levels and notice that the total work per level increases as the recursion goes deeper: there are still log2 n levels, but the cost per level grows as a geometric sum, so the bottom level dominates. Therefore dividing a problem into 3 subproblems gives a running time of O(n^(log2 3)), roughly O(n^1.59), which is less than dividing it into 4 subproblems, O(n^(log2 4)) = O(n^2). This makes sense, as there is more work with the additional recursive calls. The same bound can also be found through partial substitution: guess the form of the solution, substitute it into the recurrence, and solve for the constants.
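A quick way to see the geometric growth from unrolling is to count the work directly. The sketch below (an assumed illustration, not from the book) tallies T(n) = q*T(n/2) + n and checks that for q = 3, doubling n roughly triples the total, as the O(n^(log2 3)) bound predicts:

```python
# Empirically check the recurrence T(n) = q*T(n/2) + n, T(1) = 1,
# by summing the "work" done at every level of the recursion.

def total_work(n, q):
    """Total units of work for T(n) = q*T(n/2) + n with T(1) = 1."""
    if n <= 1:
        return 1
    return q * total_work(n // 2, q) + n

# More subproblems means more total work:
print(total_work(1024, 1) < total_work(1024, 3) < total_work(1024, 4))  # True

# For q = 3 the total grows like n**(log2 3) ~ n**1.585, so doubling n
# should multiply the work by about 2**1.585 = 3.
print(total_work(2048, 3) / total_work(1024, 3))  # close to 3
```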
+ | |||
+ | Readability: | ||
+ | |||
====== 5.3 Counting Inversions ======
Section 5.3 introduces a new problem motivated by rankings and collaborative filtering. If you ask a group of people to rank 5 movies from best to worst, how can you tell which two people's rankings are similar and which differ? The book tackles this with a divide-and-conquer approach: to measure how different Sally's and Jane's ranking lists are, count the number of inversions it would take to make Sally's list look like Jane's. The divide-and-conquer approach recursively divides the list in half, sorts the two halves, and merges them back into a larger whole while counting the inversions. This is the Merge-and-Count part of the problem, an algorithm that takes linear time. The algorithm is written out on page 224; essentially, it keeps a pointer into each sorted half and compares the two current elements. If the element from the first half is smaller, append it to the sorted list and advance that pointer. If the element from the second half is smaller, append it to the sorted list and add to the Count variable the number of elements remaining in the first half, since the appended element is inverted with every one of them. Keep doing this while both halves are non-empty; when one half empties, append the rest of the other half to the sorted list and return Count and the sorted list. Merge-and-Count is executed inside a larger recursive function, Sort-and-Count(L), that simply divides the list into halves, recurses on each, and returns the total count and the sorted list. Overall this takes O(n log n) time.
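The two routines described above can be sketched as follows (my own Python rendering of the scheme, not the book's page-224 pseudocode verbatim):

```python
# Sort-and-Count: count inversions while mergesorting, in O(n log n).

def merge_and_count(a, b):
    """Merge two sorted lists; count pairs (x, y), x in a, y in b, x > y."""
    merged, count, i, j = [], 0, 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i]); i += 1
        else:
            # b[j] is smaller than everything left in a, so this one
            # append accounts for len(a) - i inversions at once.
            merged.append(b[j]); j += 1
            count += len(a) - i
    merged.extend(a[i:])
    merged.extend(b[j:])
    return count, merged

def sort_and_count(lst):
    """Return (number of inversions in lst, sorted copy of lst)."""
    if len(lst) <= 1:
        return 0, list(lst)
    mid = len(lst) // 2
    count_left, left = sort_and_count(lst[:mid])
    count_right, right = sort_and_count(lst[mid:])
    count_cross, merged = merge_and_count(left, right)
    return count_left + count_right + count_cross, merged

# Inversions in [2, 4, 1, 3, 5]: (2,1), (4,1), (4,3).
print(sort_and_count([2, 4, 1, 3, 5]))  # (3, [1, 2, 3, 4, 5])
```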
Readability: 10

====== 5.4 Finding the Closest Pair of Points ======
Given n points in the plane, find the pair that is closest together. One's first reaction is to compare every pair of points, which is a quadratic solution. However, through divide and conquer you can find the closest pair in O(n log n) time. First consider the one-dimensional version: sort the points on the line, then walk through the sorted list and compute the distance between each point and the one that comes after it, for O(n log n) total. The two-dimensional problem has to account for both x and y. So first take the left half of the points and find its closest pair, then do the same for the right half; this gives an O(n log n) recursion. However, it misses pairs that cross between the left and right sides. To handle these, let delta be the smaller of the closest distances found in the two halves, and only consider points within distance delta of the center line L on either side. Page 228 proves why a distance of delta on each side of L suffices. In English: since the smallest distance within each half is delta, any point farther than delta from the center cannot be part of a crossing pair that beats the current minimum. From there the book partitions this strip into delta/2 x delta/2 boxes. Each box contains at most one point (if a box held two points, their distance would be smaller than delta, contradicting the choice of delta), so each point only needs to be compared against the points in 15 nearby boxes; in class we showed that 7 comparisons suffice. To implement the algorithm we sort the points in the strip by their y-coordinate, which we can do because after the recursive calls we have insight we lacked during the recursion, namely delta. If some crossing pair has distance less than delta, that pair is returned; if not, delta is returned. The complete algorithm is on page 230. To prove its correctness the book uses proof by induction, though the proof seemed incomplete to me as I was reading it. Overall, the running time of the algorithm is O(n log n), not O(n^2)!
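The scheme above can be sketched as follows (an assumed implementation of the standard divide-and-conquer recipe, not the book's page-230 pseudocode; it assumes distinct points, pre-sorts by x and y, and checks each strip point against the next 7 by y-coordinate):

```python
# Closest pair of points by divide and conquer, O(n log n).
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def closest_pair(points):
    """Smallest distance between any two of the given (distinct) points."""
    px = sorted(points)                        # sorted by x
    py = sorted(points, key=lambda p: p[1])    # sorted by y
    return _closest(px, py)

def _closest(px, py):
    n = len(px)
    if n <= 3:                                 # brute-force tiny cases
        return min(dist(px[i], px[j])
                   for i in range(n) for j in range(i + 1, n))
    mid = n // 2
    mid_x = px[mid][0]                         # x-coordinate of line L
    left_px, right_px = px[:mid], px[mid:]
    left_set = set(left_px)                    # assumes distinct points
    left_py = [p for p in py if p in left_set]
    right_py = [p for p in py if p not in left_set]
    delta = min(_closest(left_px, left_py), _closest(right_px, right_py))
    # Only points within delta of L can form a closer crossing pair.
    strip = [p for p in py if abs(p[0] - mid_x) < delta]
    best = delta
    for i, p in enumerate(strip):
        # Each strip point only needs the next 7 strip points by y,
        # by the delta/2-box packing argument above.
        for q in strip[i + 1:i + 8]:
            best = min(best, dist(p, q))
    return best

pts = [(0, 0), (3, 4), (1, 1), (7, 7), (0.5, 0.9)]
print(closest_pair(pts))  # distance between (1, 1) and (0.5, 0.9)
```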
BOOM.
Readability:

====== Chapter 6: Dynamic Programming ======
{{:
Readability:

====== 6.1 Weighted Interval Scheduling: A Recursive Procedure ======
{{:
{{:
Readability:

====== 6.2 Principles of Dynamic Programming: Memoization or Iteration over Subproblems ======
{{:
Readability:

====== 6.3 Segmented Least Squares: Multiway Choices ======
{{:
{{:

Readability:

====== 6.4 Subset Sums and Knapsacks: Adding a Variable ======
{{:
{{:
Readability:

====== 7.1 Maximum Flow Problem and the Ford-Fulkerson Algorithm ======
{{:
Readability:

====== 7.2 Maximum Flows and Minimum Cuts in a Network ======
{{:
Readability:

====== 7.5 ======
{{:
Readability:

====== 7.7 ======
{{:
Readability: