====== 5.4 - Finding the Closest Pair of Points ======
**Summary of the Algorithm**
  Closest-Pair(P)
     Construct Px and Py        (O(n log n) time)
     (p0*, p1*) = Closest-Pair-Rec(Px, Py)
  
  Closest-Pair-Rec(Px, Py)
     If |P| ≤ 3 then
        find closest pair by measuring all pairwise distances
     Endif
     Construct Qx, Qy, Rx, Ry           (O(n) time)
     (q0*, q1*) = Closest-Pair-Rec(Qx, Qy)
     (r0*, r1*) = Closest-Pair-Rec(Rx, Ry)
  
     δ = min(d(q0*, q1*), d(r0*, r1*))
     x* = maximum x-coordinate of a point in set Q
     L = {(x,y) : x = x*}
     S = points in P within distance δ of L
  
     Construct Sy                      (O(n) time)
     For each point s ∈ Sy, compute distance from s to each of next 15 points in Sy
        Let s, s' be pair achieving minimum of these distances               (O(n) time)
  
     If d(s,s') < δ then
        Return (s, s')
     Else if d(q0*, q1*) < d(r0*, r1*) then
        Return (q0*, q1*)
     Else
        Return (r0*, r1*)
     Endif

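The recursion above can be sketched in runnable Python. This is my own illustration, not the textbook's code: the helper name `_rec` is mine, and it assumes the input points are distinct (x, y) tuples.

```python
import math

# Sketch of the divide-and-conquer closest-pair algorithm.
# Returns (distance, p, q) for a closest pair; assumes distinct points.
def closest_pair(points):
    px = sorted(points)                                  # Px: sorted by x
    py = sorted(points, key=lambda p: (p[1], p[0]))      # Py: sorted by y
    return _rec(px, py)

def _rec(px, py):
    n = len(px)
    if n <= 3:  # base case: measure all pairwise distances
        return min(((math.dist(p, q), p, q)
                    for i, p in enumerate(px) for q in px[i + 1:]),
                   key=lambda t: t[0])
    mid = n // 2
    x_star = px[mid - 1][0]              # max x-coordinate of a point in Q
    qx, rx = px[:mid], px[mid:]          # Qx, Rx
    q_set = set(qx)
    qy = [p for p in py if p in q_set]   # Qy and Ry in one O(n) pass
    ry = [p for p in py if p not in q_set]
    best = min(_rec(qx, qy), _rec(rx, ry), key=lambda t: t[0])
    sy = [p for p in py if abs(p[0] - x_star) < best[0]]  # strip S, y-sorted
    for i, s in enumerate(sy):           # compare to the next 15 points
        for t in sy[i + 1:i + 16]:
            d = math.dist(s, t)
            if d < best[0]:
                best = (d, s, t)
    return best
```

`math.dist` needs Python 3.8+; splitting Py by set membership is one simple way to get Qy and Ry in linear time.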
**Analyzing the Algorithm**
The algorithm correctly outputs a closest pair of points in P.

The running time of the algorithm is //O(n log n)//.

8/10.

====== 5.5 - Integer Multiplication ======
**The Problem**
Consider the multiplication of two //n//-bit integers. Even though this seems trivial, the way we are taught to compute it in school takes //O(n^2)// running time. But we can improve on that!

**Designing the Algorithm**
Let's assume we're in base-2 (but it doesn't matter) and start by writing //x// as //x1 * 2^(n/2) + x0//. In other words, //x1// corresponds to the "high-order" //n///2 bits and //x0// corresponds to the "low-order" //n///2 bits. Similarly, we write //y = y1 * 2^(n/2) + y0//. Thus, we have //xy = (x1 * 2^(n/2) + x0)(y1 * 2^(n/2) + y0) = x1y1 * 2^n + (x1y0 + x0y1) * 2^(n/2) + x0y0//.

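The split and the product identity can be sanity-checked directly in Python with bit shifts (the 8-bit operands here are arbitrary values I chose):

```python
# Verify x = x1*2^(n/2) + x0 and the product identity for 8-bit numbers.
x, y, n = 0b10110111, 0b01101001, 8
half = n // 2
x1, x0 = x >> half, x & ((1 << half) - 1)   # high-order / low-order bits
y1, y0 = y >> half, y & ((1 << half) - 1)
assert x == x1 * (1 << half) + x0
assert x * y == x1*y1 * (1 << n) + (x1*y0 + x0*y1) * (1 << half) + x0*y0
```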
This reduces the problem of solving a single //n//-bit instance to the problem of solving four //n///2-bit instances: recursively compute the results for these four instances and then combine them using the above equation. The combining requires a constant number of additions of //O(n)//-bit numbers, so it takes time //O(n)//; thus, the running time is bounded by the recurrence //T(n) ≤ 4T(n/2) + cn// for a constant //c//. ...good enough? NOPE — that recurrence solves to //O(n^2)//, no better than the grade-school method.

Try using only three recursive calls: with //q// = 3, the recurrence //T(n) ≤ qT(n/2) + cn// solves to //T(n) = O(n^log2(q))//. The trick is to consider the result //(x1+x0)(y1+y0) = x1y1 + x1y0 + x0y1 + x0y0//, which yields the middle term //x1y0 + x0y1// once we subtract //x1y1// and //x0y0// — both of which we need anyway.

   Recursive-Multiply(x,y):
   Write x = x1 * 2^(n/2) + x0
         y = y1 * 2^(n/2) + y0
   Compute x1+x0 and y1+y0
   p = Recursive-Multiply(x1+x0, y1+y0)
   x1y1 = Recursive-Multiply(x1,y1)
   x0y0 = Recursive-Multiply(x0,y0)
   Return x1y1 * 2^n + (p - x1y1 - x0y0) * 2^(n/2) + x0y0

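A runnable Python version of Recursive-Multiply, under my own assumptions: nonnegative integers, and a base case at one-bit factors, which the pseudocode leaves implicit.

```python
# Karatsuba-style Recursive-Multiply: three recursive calls per level.
def recursive_multiply(x, y):
    if x < 2 or y < 2:                      # base case: 0- or 1-bit factor
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    x1, x0 = x >> half, x & ((1 << half) - 1)   # x = x1*2^half + x0
    y1, y0 = y >> half, y & ((1 << half) - 1)
    p = recursive_multiply(x1 + x0, y1 + y0)    # (x1+x0)(y1+y0)
    x1y1 = recursive_multiply(x1, y1)
    x0y0 = recursive_multiply(x0, y0)
    # middle term x1y0 + x0y1 recovered as p - x1y1 - x0y0
    return (x1y1 << (2 * half)) + ((p - x1y1 - x0y0) << half) + x0y0
```

Shifting by `2*half` rather than `n` keeps the identity exact even when //n// is odd.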
**Analyzing the Algorithm**
Given two //n//-bit numbers, the algorithm performs a constant number of additions on //O(n)//-bit numbers, in addition to the three recursive calls.

The running time satisfies //T(n) ≤ 3T(n/2) + cn//.

The running time of Recursive-Multiply on two //n//-bit factors is //O(n^log2(3)) = O(n^1.59)//.