Section 5.4 looks at the closest pair problem, where we want to find the pair of closest points among n points in the plane. There is an obvious O(n^2) solution: compute the distance between every pair of points and take the minimum. But the section shows we can do better than O(n^2) with a divide and conquer algorithm: draw a vertical line L so that half the points lie on one side and half on the other, recursively find the closest pair on each side, find in linear time the closest pair where one point lies on the left and one on the right, and return the smallest of these distances as the overall solution. This yields an O(n log n) algorithm. To set up the recursion, we first define Q to be the set of points in the first n/2 positions of the list Px (the points sorted by x-coordinate), and R to be the set of points in the final n/2 positions of Px. Next we determine the closest pair of points in Q, (q0*, q1*), by looking at Qx and Qy, and determine the closest pair of points in R, (r0*, r1*), by looking at Rx and Ry. The next part of the algorithm is to combine these solutions. First we let delta be the minimum of d(q0*, q1*) and d(r0*, r1*); then we ask whether there is some pair (q, r), with q in Q and r in R, whose distance is less than delta. If not, then we are done and have found the closest pair; otherwise we must look at the points close to L to find such a pair. To find this pair we can restrict our search to S, the set of points within distance delta of L. Once we have sorted the points in S by their y-coordinates into the list Sy, we can say that there exist q in Q and r in R with d(q, r) < delta if and only if there exist s, s' in S with d(s, s') < delta, and any such s and s' must lie within 15 positions of each other in Sy, so scanning Sy takes only linear time. Overall the running time is O(n log n): sorting the points by their x- and y-coordinates takes O(n log n) time, and the rest of the algorithm does linear work per level of recursion, satisfying T(n) <= 2T(n/2) + cn, so the upper bound for the problem is O(n log n).
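To make the recursion concrete, here is a minimal Python sketch (not the book's pseudocode: the function names, the brute-force base case for three or fewer points, and the assumption of at least two distinct points are my own choices):

<code python>
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def closest_pair(points):
    """points: a list of at least two distinct (x, y) tuples."""
    Px = sorted(points)                       # sorted by x-coordinate
    Py = sorted(points, key=lambda p: p[1])   # sorted by y-coordinate
    return rec(Px, Py)

def rec(Px, Py):
    n = len(Px)
    if n <= 3:  # base case: brute force over all pairs
        return min(((dist(p, q), (p, q))
                    for i, p in enumerate(Px) for q in Px[i + 1:]),
                   key=lambda t: t[0])
    mid = n // 2
    Qx, Rx = Px[:mid], Px[mid:]        # Q: left half, R: right half of Px
    x_line = Qx[-1][0]                 # x-coordinate of the dividing line L
    in_Q = set(Qx)
    Qy = [p for p in Py if p in in_Q]  # Qy and Ry extracted in linear time
    Ry = [p for p in Py if p not in in_Q]

    dq, pq = rec(Qx, Qy)               # closest pair (q0*, q1*) within Q
    dr, pr = rec(Rx, Ry)               # closest pair (r0*, r1*) within R
    delta, best = min((dq, pq), (dr, pr), key=lambda t: t[0])

    # S: points within delta of L, kept in y-order (the list Sy)
    Sy = [p for p in Py if abs(p[0] - x_line) < delta]
    # Any pair of S closer than delta lies within 15 positions in Sy,
    # so comparing each point to the next 15 is a linear-time scan.
    for i, s in enumerate(Sy):
        for t in Sy[i + 1:i + 16]:
            d = dist(s, t)
            if d < delta:
                delta, best = d, (s, t)
    return delta, best
</code>

For example, closest_pair([(0, 0), (3, 4), (1, 1), (5, 2)]) returns (1.4142135623730951, ((0, 0), (1, 1))).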

Section 5.5 looks at integer multiplication, and how divide and conquer can be used to solve it. The algorithm students learn in elementary school (computing the partial products and summing them) has a running time of O(n^2), because it takes O(n) time to compute each partial product and there are n partial products. A better solution is found in this section by breaking the product down into smaller subproducts. Writing x = x1*2^(n/2) + x0 and y = y1*2^(n/2) + y0, one option is to recursively compute the results for four n/2-bit instances and then combine them using the equation xy = x1*y1*2^n + (x1*y0 + x0*y1)*2^(n/2) + x0*y0. Overall this approach satisfies T(n) <= 4T(n/2) + cn, for some constant c, which unfortunately still takes O(n^2) time. Therefore, to improve the running time we need to decrease the number of recursive calls from four to three, which produces a running time less than O(n^2). The trick is to recursively compute (x1 + x0)(y1 + y0), x1*y1, and x0*y0, and recover the middle term as (x1 + x0)(y1 + y0) - x1*y1 - x0*y0. The resulting Recursive-Multiply algorithm is a three-way branching recursion whose running time satisfies T(n) <= 3T(n/2) + cn, which solves to O(n^(log2 3)) = O(n^1.59).
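Here is a minimal Python sketch of that three-call recursion (not the book's pseudocode verbatim: the cutoff n <= 2 and the bit-shift splitting are my own choices):

<code python>
def recursive_multiply(x, y, n):
    """Multiply x and y, treated as n-bit integers, with three
    recursive calls: T(n) <= 3T(n/2) + cn = O(n^(log2 3))."""
    if n <= 2:                                   # tiny inputs: multiply directly
        return x * y
    half = n // 2
    x1, x0 = x >> half, x & ((1 << half) - 1)    # x = x1*2^half + x0
    y1, y0 = y >> half, y & ((1 << half) - 1)    # y = y1*2^half + y0
    a = recursive_multiply(x1, y1, n - half)             # x1*y1
    b = recursive_multiply(x0, y0, half)                 # x0*y0
    p = recursive_multiply(x1 + x0, y1 + y0, half + 1)   # (x1+x0)(y1+y0)
    # The middle term x1*y0 + x0*y1 equals p - a - b, so the
    # fourth recursive call is never needed.
    return (a << (2 * half)) + ((p - a - b) << half) + b
</code>

For example, recursive_multiply(1234, 5678, 13) returns 7006652, matching 1234 * 5678 (13 is the larger of the two bit lengths).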
