Let //x*// denote the //x//-coordinate of the rightmost point in //Q//, and let //L// denote the vertical line described by the equation //x = x*//.
Then, we can further say: there exists //q ∈ Q// and //r ∈ R// for which //d(q,r) < δ// if and only if there exist //s, s' ∈ S// for which //d(s, s') < δ//.
**Summary of the Algorithm**

  Closest-Pair(P)
    Construct Px and Py   (O(n log n) time)
    (p0*, p1*) = Closest-Pair-Rec(Px, Py)

  Closest-Pair-Rec(Px, Py)
    If |P| ≤ 3 then
      find closest pair by measuring all pairwise distances
    Construct Qx, Qy, Rx, Ry   (O(n) time)
    (q0*, q1*) = Closest-Pair-Rec(Qx, Qy)
    (r0*, r1*) = Closest-Pair-Rec(Rx, Ry)

    δ = min(d(q0*, q1*), d(r0*, r1*))
    x* = maximum x-coordinate of a point in set Q
    L = {(x,y) : x = x*}
    S = points in P within distance δ of L

    Construct Sy   (O(n) time)
    For each point s ∈ Sy, compute distance from s to each of next 15 points in Sy
    Let s, s' be pair achieving minimum of these distances

    If d(s, s') < δ then
      Return (s, s')
    Else if d(q0*, q1*) < d(r0*, r1*) then
      Return (q0*, q1*)
    Else
      Return (r0*, r1*)
| + | |||
**Analyzing the Algorithm**
The algorithm correctly outputs a closest pair of points in //P//.

The running time of the algorithm is //O(n log n)//.
| + | |||
| + | 8/10. | ||
| + | |||
| + | ====== 5.5 - Integer Multiplication ====== | ||
| + | **The Problem** | ||
| + | Consider the multiplication of two integers. Even though this seems trivial, the way that we are taught to compute it in school takes //O(n^2)// running time. But we can improve on that! | ||
| + | |||
**Designing the Algorithm**
Let's assume we're working in base-2 (but it doesn't matter which base we use). Write //x = x1 · 2^(n/2) + x0//, where //x1// is the "high-order" //n///2 bits and //x0// the "low-order" //n///2 bits, and similarly //y = y1 · 2^(n/2) + y0//. Then //xy = x1y1 · 2^n + (x1y0 + x0y1) · 2^(n/2) + x0y0//.
| + | |||
This reduces the problem of solving a single //n//-bit instance to the problem of solving four //n///2-bit instances -> recursively compute the results for these four instances and then combine them using the above equation. The combining requires a constant number of additions of //n//-bit numbers, so it takes //O(n)// time. This gives //T(n) ≤ 4T(n/2) + cn//, which solves to //T(n) = O(n^2)// - no better than the grade-school method.
| + | |||
Instead, try using only three multiplications. In general, //q// recursive calls on //n///2-bit instances combined in //O(n)// time lead to //T(n) ≤ O(n^log2(q))//, so //q = 3// yields //O(n^log2(3)) = O(n^1.59)//. The key identity: //(x1 + x0)(y1 + y0) = x1y1 + x1y0 + x0y1 + x0y0//, so the middle term //x1y0 + x0y1// can be recovered from this single product by subtracting //x1y1// and //x0y0//.
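A quick numeric sanity check of that identity (the half-values here are arbitrary small numbers I picked, not from the book):

```python
# Check: the middle term x1*y0 + x0*y1 equals p - x1*y1 - x0*y0,
# where p = (x1 + x0)*(y1 + y0) costs only ONE multiplication.
x1, x0, y1, y0 = 3, 1, 2, 3          # arbitrary small halves
p = (x1 + x0) * (y1 + y0)            # the single product
assert p - x1*y1 - x0*y0 == x1*y0 + x0*y1
```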
| + | |||
| + | | ||
| + | Write x = x1 * 2^(n/2) + x0 | ||
| + | y = y1 * 2^(n/2) + y0 | ||
| + | | ||
| + | p = Recursive-Multiply(x1+x0, | ||
| + | x1y1 = Recursive-Multiply(x1, | ||
| + | x0y0 = Recursive-Multiply(x0, | ||
| + | | ||
| + | |||
**Analyzing the Algorithm**
Given two //n//-bit numbers, the algorithm performs a constant number of additions and subtractions on //n//-bit numbers, in addition to the three recursive calls on //n///2-bit instances.
| + | |||
The running time satisfies: //T(n) ≤ 3T(n/2) + cn//.
| + | |||
The running time of Recursive-Multiply on two //n//-bit factors is //O(n^log2(3)) = O(n^1.59)//.
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
