Section 5.2 This section looks at divide-and-conquer algorithms that make q recursive calls on subproblems of size n/2 and then combine the results in O(n) time. This resembles the mergesort recurrence, which has q = 2 recursive calls, but we might also see q > 2 or q = 1. To represent the running time of these algorithms we write T(n) ≤ qT(n/2) + cn when n > 2, and T(2) ≤ c; this is the book's recurrence (5.3).

The book unrolls the recurrence for q > 2 on pages 215-217 and shows that any function T(·) satisfying (5.3) with q > 2 is bounded by O(n^(log₂ q)), so the running time is more than linear. It does the same unrolling for q = 1 on pages 218-219 to show that any function T(·) satisfying (5.3) with q = 1 is bounded by O(n). Here the algorithm performs log₂ n levels of recursion, but the work at level j is only cn/2^j, so the total work is the geometric series cn + cn/2 + cn/4 + ... ("geometric series" refers to this sum of per-level work, not to the problem being solved). Because the terms decay geometrically, the series converges to at most 2cn: half the work is performed at the top level of recursion, and the overall running time stays linear.

In summary, when q = 1 the running time is linear; when q = 2 the running time is O(n log n); and when q > 2 there is a polynomial bound with an exponent larger than 1 that grows with q. Running times depend on where most of the work is done. When q = 1 the work is dominated by the top level. When q > 2 the work is dominated by the constant-sized subproblems at the bottom of the recursion. When q = 2 the exact same amount of work is done at each of the log₂ n levels, giving O(n log n).

Finally, this section looks at a recurrence that contrasts with mergesort, in which the input is divided into two equal parts that we solve recursively and then combine in quadratic time: for some constant c, T(n) ≤ 2T(n/2) + cn² when n > 2, and T(2) ≤ c. Here the combine step dominates, and unrolling shows this recurrence is bounded by O(n²).
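
To make the three cases concrete, here is a small Python sketch (my own, not from the book) that unrolls T(n) = qT(n/2) + cn numerically and compares it against the closed-form bounds; the function name and the choice c = 1 are assumptions for illustration.

  import math

  def T(n, q, c=1):
      # Unroll T(n) = q*T(n/2) + c*n with base case T(2) = c.
      if n <= 2:
          return c
      return q * T(n // 2, q, c) + c * n

  for q in (1, 2, 3):
      n = 1 << 16  # a power of two, so n/2 stays an integer
      # Predicted growth: q = 1 -> n, q = 2 -> n log n, q > 2 -> n^(log2 q)
      if q == 1:
          bound = n
      elif q == 2:
          bound = n * math.log2(n)
      else:
          bound = n ** math.log2(q)
      print(q, T(n, q), round(T(n, q) / bound, 2))

The ratio in the last column stays bounded as n grows, which is exactly what the O(·) bounds claim.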

Section 5.3 In this section we see how a variant of mergesort can be used to solve a non-sorting problem: analyzing rankings. If we have a sequence of n distinct numbers, we can measure how far the sequence is from being sorted by counting its inversions, which we can do in O(n log n) time. We divide the sequence into two halves, recursively sort and count the inversions within each half, and then use Merge-and-Count (p. 224) to merge the sorted halves and count the inversions between them in O(n) time. The full Sort-and-Count procedure combines these steps to sort the list and count all of its inversions in O(n log n) time (p. 225).
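
Here is a minimal Python rendering of the two procedures (the book gives pseudocode on pp. 224-225; this translation and its variable names are mine):

  def merge_and_count(a, b):
      # Merge two sorted lists; count pairs (x in a, y in b) with x > y.
      merged, inversions = [], 0
      i = j = 0
      while i < len(a) and j < len(b):
          if a[i] <= b[j]:
              merged.append(a[i])
              i += 1
          else:
              # b[j] is smaller than every remaining element of a,
              # so it forms an inversion with each of them.
              merged.append(b[j])
              inversions += len(a) - i
              j += 1
      merged.extend(a[i:])
      merged.extend(b[j:])
      return merged, inversions

  def sort_and_count(seq):
      # Recursively sort seq and count its inversions in O(n log n).
      if len(seq) <= 1:
          return list(seq), 0
      mid = len(seq) // 2
      left, left_inv = sort_and_count(seq[:mid])
      right, right_inv = sort_and_count(seq[mid:])
      merged, split_inv = merge_and_count(left, right)
      return merged, left_inv + right_inv + split_inv

  # [2, 4, 1, 3] has 3 inversions: (2,1), (4,1), (4,3)
  print(sort_and_count([2, 4, 1, 3]))  # -> ([1, 2, 3, 4], 3)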

Section 5.4 This section explains how to find the closest pair out of n points in the plane in O(n log n) time. For a set of points P we can assume that no two points have the same x-coordinate or the same y-coordinate. We use the same divide-and-conquer approach as in mergesort: find the closest pair in the "left half" and the closest pair in the "right half," then combine the results in O(n) time, for an overall O(n log n) running time. Pages 227-229 explain the process; letting δ be the smaller of the two distances found recursively and S be the set of points within distance δ of the dividing line, they prove that if s, s′ ∈ S have the property that d(s, s′) < δ, then s and s′ are within 15 positions of each other in S_Y, the list S sorted by y-coordinate. Page 231 proves that the algorithm correctly outputs the closest pair of points in O(n log n) time.
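
The 15-position lemma is what makes the combine step linear. Below is a Python sketch of the whole algorithm under the book's assumption of distinct coordinates; the helper names and the use of a set to split the y-sorted list are my own choices, not the book's.

  import math

  def dist(p, q):
      return math.hypot(p[0] - q[0], p[1] - q[1])

  def brute(pts):
      # O(n^2) check, used only on constant-sized subproblems.
      return min((dist(p, q), p, q)
                 for i, p in enumerate(pts) for q in pts[i + 1:])

  def closest(px, py):
      # px: points sorted by x; py: the same points sorted by y.
      n = len(px)
      if n <= 3:
          return brute(px)
      mid = n // 2
      x_mid = px[mid - 1][0]                  # x-coordinate of dividing line
      qx, rx = px[:mid], px[mid:]             # left and right halves
      left = set(qx)
      qy = [p for p in py if p in left]       # y-sorted left half
      ry = [p for p in py if p not in left]   # y-sorted right half
      best = min(closest(qx, qy), closest(rx, ry))
      delta = best[0]
      # S_Y: points within delta of the dividing line, sorted by y.
      sy = [p for p in py if abs(p[0] - x_mid) < delta]
      # By the lemma, a pair closer than delta must sit within 15
      # positions of each other in sy, so this scan is O(n).
      for i, p in enumerate(sy):
          for q in sy[i + 1:i + 16]:
              best = min(best, (dist(p, q), p, q))
      return best

  def closest_pair(points):
      px = sorted(points)
      py = sorted(points, key=lambda p: p[1])
      return closest(px, py)

  print(closest_pair([(0, 9), (1, 5), (3, 4), (7, 2), (8, 1), (12, 0)]))
  # -> (1.414..., (7, 2), (8, 1))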

Section 5.5 This section looks at using divide and conquer to multiply two n-bit numbers. If we count a single operation on a pair of bits as one primitive step, it takes O(n) time to compute each partial product and O(n) time to fold it into the running sum of all partial products. Since we have n partial products, the elementary algorithm has a total running time of O(n²) (algorithm on p. 233).
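
The section then improves on this by splitting each number into its high-order and low-order halves and recursing on only three half-sized products instead of four, which gives the q = 3 recurrence from Section 5.2 and a running time of O(n^(log₂ 3)), roughly O(n^1.59). A minimal sketch in Python (the book works over bit strings; operating directly on nonnegative integers and the cutoff of 16 are my simplifications):

  def rec_multiply(x, y):
      # Divide-and-conquer multiplication of nonnegative integers.
      if x < 16 or y < 16:                          # small base case
          return x * y
      half = max(x.bit_length(), y.bit_length()) // 2
      x1, x0 = x >> half, x & ((1 << half) - 1)     # high/low halves of x
      y1, y0 = y >> half, y & ((1 << half) - 1)     # high/low halves of y
      # Three recursive products instead of four:
      p = rec_multiply(x1 + x0, y1 + y0)
      hi = rec_multiply(x1, y1)
      lo = rec_multiply(x0, y0)
      mid = p - hi - lo                             # equals x1*y0 + x0*y1
      return (hi << (2 * half)) + (mid << half) + lo

  print(rec_multiply(12345, 6789))  # -> 83810205, same as 12345 * 6789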

This section probably rates an 8 in terms of readability. I feel like I understand the material conceptually, but I’m a little uncertain about the equations that explain how we arrive at different running times, particularly those in 5.2.
