courses:cs211:winter2018:journals:ahmadh:ch5 — ahmadh, 2018/03/13 02:48
There are log n levels of recursion, and the total amount of work performed is the sum over all the levels. This sum turns out to be a geometric series, and solving the series yields a running time of O(n^(log_2 q)) (see page 216 of the textbook).

| + | ==== 5.2.2 The Case of One Subproblem ==== | ||
| + | |||
| + | We can unroll the recurrence for the case of one subproblem as follows: | ||
| + | |||
| + | At the first level of recursion, we have a single problem of size n, which takes time at most cn plus the time spent in all subsequent recursive calls. The next level has one problem of size n/2, which contributes cn/2, and the level after that has one problem of size n/4, which contributes cn/4. As such, the total work per level when q = 1 is actually decreasing as we proceed through the recursion. At an arbitrary level j, we still have just one instance; it has size n/2^j and contributes cn/2^j to the running time. As was the case before, there are log n levels of recursion, and the total amount of work performed is again a geometric series, solving which yields a running time of O(n) (see page 218 of the textbook). | ||
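
This decreasing series can be checked in a few lines (a sketch of my own; the values of n and c are arbitrary):

```python
# Sum the per-level costs c*n/2^j for the q = 1 case; the geometric
# series is bounded by 2*c*n, which is why the total work is O(n).
n, c = 1024, 1.0
level_costs = []
size = n
while size >= 1:
    level_costs.append(c * size)  # level j contributes c * n / 2^j
    size //= 2
total = sum(level_costs)
# total comes out to 2047.0, safely below the bound 2 * c * n = 2048.0
```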
| + | |||
| + | ==== 5.2.3 Comments ==== | ||
| + | |||
| + | The case with just one subproblem was relatively easier to understand and follow. I personally struggled initially in class when we introduced the idea of q > 2 subproblems. The process seemed straightforward, | ||
| + | |||
| + | Other than that, this was a straightforward(ish) section that was just an extension of the previous section--and as such, not super interesting. 6.5ish/10. | ||
| + | |||
| + | ===== 5.3 Counting Inversions ===== | ||
| + | |||
| + | We are given a sequence of n distinct numbers a_1, ..., a_n. We want to define a measure that tells us how far this list is from being in ascending order--the value of the measure should be 0 if a_l < a_2 < . . . < a_n, and should increase as the numbers become more scrambled. | ||
| + | |||
| + | We could quantify this notion by counting the number of inversions. We say that two indices i < j form an inversion if a_i > a_j, i.e. if the two elements a_i and a_j are "out of order." | ||
| + | |||
| + | ==== 5.3.1 Designing and Analyzing the Algorithm ==== | ||
| + | |||
| + | The simplest algorithm to solve this problem could look at every pair of numbers (a_i, a_j) and determine whether they constitute an inversion. This would take O(n^2) time--as such, the algorithm is already pretty efficient. We can, however, seek an even more efficient solution to this problem. | ||
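
As a concrete sketch of the quadratic approach (the function name is my own), the check is a direct double loop over all pairs:

```python
# Brute-force inversion count: examine every pair (i, j) with i < j
# and count those that are out of order.
def count_inversions_brute(a):
    n = len(a)
    return sum(1 for i in range(n) for j in range(i + 1, n) if a[i] > a[j])

# count_inversions_brute([2, 4, 1, 3, 5]) returns 3:
# the inversions are (2, 1), (4, 1), and (4, 3).
```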
| + | |||
| + | Consider the following algorithm: | ||
| + | |||
| + | Merge-and-Count(A, | ||
| + | | ||
| + | | ||
| + | While both lists are nonempty: | ||
| + | Let a_i and b_j be the elements pointed to by the Current pointer | ||
| + | Append the smaller of these two to the output list | ||
| + | If b_j is the smaller element then | ||
| + | | ||
| + | Endif | ||
| + | Advance the Current pointer in the list from which the smaller element was selected | ||
| + | | ||
| + | Once one list is empty, append the remainder of the other list to the output | ||
| + | | ||
| + | |||
| + | Each iteration of the While loop takes constant time, and in each iteration we add some element to the output that will never be seen again. Thus the number of iterations can be at most the sum of the initial lengths of A and B, and so the total running time is O(n). | ||
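
In Python, the merging step might look like the following (a sketch assuming both input lists are already sorted; identifiers are my own):

```python
# Merge two sorted lists while counting the inversions between them.
def merge_and_count(a, b):
    merged = []
    count = 0
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i])
            i += 1
        else:
            # b[j] is placed ahead of everything left in a, so each
            # remaining element of a forms an inversion with b[j]
            count += len(a) - i
            merged.append(b[j])
            j += 1
    # once one list is empty, append the remainder of the other
    merged.extend(a[i:])
    merged.extend(b[j:])
    return count, merged

# merge_and_count([1, 4, 5], [2, 3, 6]) returns (4, [1, 2, 3, 4, 5, 6]):
# the four inversions are (4, 2), (4, 3), (5, 2), and (5, 3).
```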
| + | |||
| + | |||
| + | We use this Merge-and-Count routine in a recursive procedure that simultaneously sorts and counts the number of inversions in a list L. | ||
| + | |||
| + | Sort-and-Count(L): | ||
| + | If the list has one element then | ||
| + | there are no inversions | ||
| + | Else | ||
| + | Divide the list into two halves: | ||
| + | A contains the first [n/2] elements | ||
| + | B contains the remaining [n/2] elements | ||
| + | (r_A, A) = Sort-and-Count(A) | ||
| + | (r_B, B) = Sort-and-Count(B) | ||
| + | (r, L) = Merge-and-Count(A, | ||
| + | Endif | ||
| + | | ||
| + | |||
Since our Merge-and-Count procedure takes O(n) time, the running time T(n) of the full Sort-and-Count procedure satisfies the recurrence (5.1). Therefore, the Sort-and-Count algorithm correctly sorts the input list and counts the number of inversions, and runs in O(n log n) time for a list with n elements.
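
Putting the pieces together, a self-contained Python sketch of the full procedure might look like this (names are my own; the structure follows the pseudocode above):

```python
# Recursively sort a list and count its inversions in O(n log n) time.
def sort_and_count(lst):
    if len(lst) <= 1:
        return 0, lst[:]  # at most one element: no inversions
    mid = len(lst) // 2
    r_a, a = sort_and_count(lst[:mid])  # sort and count the first half
    r_b, b = sort_and_count(lst[mid:])  # sort and count the second half
    # merge the sorted halves, counting the "split" inversions on the way
    merged, r = [], 0
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i])
            i += 1
        else:
            r += len(a) - i  # b[j] precedes all remaining elements of a
            merged.append(b[j])
            j += 1
    merged.extend(a[i:])
    merged.extend(b[j:])
    return r_a + r_b + r, merged

# sort_and_count([2, 4, 1, 3, 5]) returns (3, [1, 2, 3, 4, 5]).
```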
| + | |||
| + | ==== 5.3.2 Comments ==== | ||
| + | |||
| + | I feel like this was one of the sections where class discussion was very important. Just reading the algorithm alone did not make much sense to mean, and I struggled understanding the key reason why the algorithm returns a sorted list along with the count. It did not seem necessary to me when I was reading this section before--however, | ||
