There are two ways to solve a recurrence relation:
| - | * " | + | |
  * Start with a guess for the solution, substitute it into the recurrence relation, and check that it works. Formally, one justifies this plugging-in using an argument by induction on n.
===== 5.1 A First Recurrence: The Mergesort Algorithm =====
Mergesort sorts a given list of numbers by first dividing them into two equal halves, sorting each half separately by recursion, and then combining the results of these recursive calls--in the form of the two sorted halves--using a linear-time algorithm for merging sorted lists (that we discussed in Chapter 2).
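
As a quick reference for the description above, here is a minimal Python sketch of Mergesort (the function names are my own, not from the textbook):

```python
def merge(left, right):
    """Merge two already-sorted lists in linear time."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])   # one of these two is empty;
    out.extend(right[j:])  # the other holds the leftovers
    return out

def mergesort(nums):
    """Sort by dividing in half, recursing on each half, and merging."""
    if len(nums) <= 1:
        return list(nums)
    mid = len(nums) // 2
    return merge(mergesort(nums[:mid]), mergesort(nums[mid:]))
```
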

==== 5.1.1 Unrolling the Mergesort Recurrence ====

We can unroll the recurrence for Mergesort as follows:

At the first level of recursion, we have a single problem of size n, which takes time at most cn plus the time spent in all subsequent recursive calls. At the next level, we have two problems, each of size n/2. Each of these takes time at most cn/2, for a total of at most cn, again plus the time in subsequent recursive calls. At the third level, we have four problems, each of size n/4, each taking time at most cn/4, for a total of at most cn. In general, at level j of the recursion, the number of subproblems has doubled j times, so there are now a total of 2^j subproblems. Each has correspondingly shrunk in size by a factor of two j times, and so each has size n/2^j, and hence each takes time at most cn/2^j. Thus level j contributes a total of at most 2^j(cn/2^j) = cn to the total running time. The number of times the input must be halved in order to reduce its size from n to 2 is log n. So summing the cn work over the log n levels of recursion, we get a total running time of O(n log n).
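
As a sanity check on the unrolling argument, the recurrence can be evaluated directly and compared against the level-by-level sum (a sketch assuming c = 1 and n a power of two):

```python
import math

def T(n, c=1):
    """Worst-case bound from the recurrence T(n) <= 2*T(n/2) + c*n, T(2) <= c."""
    return c if n <= 2 else 2 * T(n // 2, c) + c * n

n, c = 1024, 1
# Every level of subproblems with size > 2 contributes c*n in total;
# the n/2 base cases of size 2 contribute c each.
levels = int(math.log2(n)) - 1
level_sum = c * n * levels + (n // 2) * c
```
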

==== 5.1.2 Substituting a Solution into the Mergesort Recurrence ====

If we have a guess for the running time that we want to verify, we can do so by plugging it into the recurrence as follows:

Suppose we believe that T(n) ≤ cn log n for all n ≥ 2, and we want to check whether this is indeed true. This clearly holds for n = 2, since in this case cn log n = 2c, and (5.1) explicitly tells us that T(2) ≤ c. Now suppose, by induction, that T(m) ≤ cm log m for all values of m < n, and we want to establish this for T(n). We do this by writing the recurrence for T(n) and plugging in the inequality T(n/2) ≤ c(n/2) log(n/2). We then simplify the resulting expression by noticing that log (n/2) = (log n) - 1. This establishes the bound we want for T(n), assuming it holds for smaller values m < n, and thus it completes the induction argument.
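
The algebra in that induction step can be checked numerically: assuming the guessed bound holds with equality at n/2, plugging it into the recurrence and using log(n/2) = (log n) - 1 lands exactly on cn log n (illustrative values of c and n):

```python
import math

c, n = 3.0, 4096
# 2 * [c*(n/2)*log2(n/2)] + c*n  should equal  c*n*log2(n):
# c*n*(log2(n) - 1) + c*n = c*n*log2(n).
plugged_in = 2 * (c * (n / 2) * math.log2(n / 2)) + c * n
```
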

==== 5.1.3 Comments ====

An interesting section that introduces a novel concept: recurrence relations. I have known for a while now that Mergesort is a "divide and conquer" algorithm.

That said, I feel like recurrence relations in general look difficult to understand or follow at first. I also felt like most of the class was not immediately comfortable with the idea and the notation. I guess this is again one of those things that are difficult to get used to, but once you do get used to them, they're quite manageable.

In terms of how interesting this section was to me, I'd say about an 8/10.

===== 5.2 Further Recurrence Relations =====

We now consider a class of recurrence relations that generalizes the relation in (5.1), and show how to solve the recurrences in this class. This more general class is obtained by considering divide-and-conquer algorithms that create recursive calls on q subproblems of size n/2 each and then combine the results in O(n) time. Thus we have:

(5.3) For some constant c, T(n) ≤ qT(n/2) + cn (where n > 2), and T(2) ≤ c.

==== 5.2.1 The Case of q > 2 Subproblems ====

We can unroll the recurrence for the case where q > 2 as follows:

At the first level of recursion, we have a single problem of size n, which takes time at most cn plus the time spent in all subsequent recursive calls. At the next level, we have q problems, each of size n/2. Each of these takes time at most cn/2, for a total of at most (q/2)cn, again plus the time in subsequent recursive calls. The next level yields q^2 problems of size n/4 each, for a total time of (q^2/4)cn. Since q > 2, we see that the total work per level is increasing as we proceed through the recursion. At an arbitrary level j, we have q^j distinct instances, each of size n/2^j. Thus the total work performed at level j is q^j(cn/2^j) = (q/2)^j cn.

There are log n levels of recursion, and the total amount of work performed is the sum over all the levels. This sum turns out to be a geometric series, and solving the series yields a running time of O(n^(log_2 q)) (see page 216 of the textbook).
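
A numeric illustration for q = 3, taking c = 1: the per-level costs (q/2)^j · cn now grow with j, and the total tracks n^(log_2 3) ≈ n^1.585, so doubling n multiplies the total by roughly q:

```python
import math

def T(n, q, c=1):
    """The general recurrence: T(n) <= q*T(n/2) + c*n, with T(2) <= c."""
    return c if n <= 2 else q * T(n // 2, q, c) + c * n

q, n, c = 3, 1024, 1
k = int(math.log2(n))  # number of halvings from n down to 1
# Level j has q**j subproblems of size n/2**j, costing (q/2)**j * c * n in total;
# the q**(k-1) base cases of size 2 cost c each.
level_sum = sum((q / 2) ** j * c * n for j in range(k - 1)) + q ** (k - 1) * c
```
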

==== 5.2.2 The Case of One Subproblem ====

We can unroll the recurrence for the case of one subproblem as follows:

At the first level of recursion, we have a single problem of size n, which takes time at most cn plus the time spent in all subsequent recursive calls. The next level has one problem of size n/2, which contributes cn/2, and the level after that has one problem of size n/4, which contributes cn/4. As such, the total work per level when q = 1 is actually decreasing as we proceed through the recursion. At an arbitrary level j, we still have just one instance; it has size n/2^j and contributes cn/2^j to the running time. As was the case before, there are log n levels of recursion, and the total amount of work performed is again a geometric series, solving which yields a running time of O(n) (see page 218 of the textbook).
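
For q = 1 the same kind of check shows the decreasing geometric series cn + cn/2 + cn/4 + ... stays below 2cn, despite the log n levels (again taking c = 1):

```python
def T1(n, c=1):
    """The q = 1 recurrence: T(n) <= T(n/2) + c*n, with T(2) <= c."""
    return c if n <= 2 else T1(n // 2, c) + c * n
```
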

==== 5.2.3 Comments ====

The case with just one subproblem was relatively easier to understand and follow. I personally struggled initially in class when we introduced the idea of q > 2 subproblems. The process seemed straightforward, but it took me a while to get comfortable with it.

Other than that, this was a straightforward(ish) section that was just an extension of the previous section--and as such, not super interesting. 6.5ish/10.

===== 5.3 Counting Inversions =====

We are given a sequence of n distinct numbers a_1, ..., a_n. We want to define a measure that tells us how far this list is from being in ascending order--the value of the measure should be 0 if a_1 < a_2 < ... < a_n, and should increase as the numbers become more scrambled.

We can quantify this notion by counting the number of inversions. We say that two indices i < j form an inversion if a_i > a_j, i.e. if the two elements a_i and a_j are "out of order."

==== 5.3.1 Designing and Analyzing the Algorithm ====

The simplest algorithm to solve this problem could look at every pair of numbers (a_i, a_j) and determine whether they constitute an inversion. This would take O(n^2) time--as such, the algorithm is already pretty efficient. We can, however, seek an even more efficient solution to this problem.
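
The quadratic baseline is short enough to write out (a Python sketch; the function name is mine). For example, [2, 4, 1, 3, 5] has three inversions: (2, 1), (4, 1), and (4, 3).

```python
def count_inversions_brute(a):
    """Count pairs i < j with a[i] > a[j] by examining every pair: O(n^2) time."""
    n = len(a)
    return sum(1 for i in range(n) for j in range(i + 1, n) if a[i] > a[j])
```
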

Consider the following algorithm:

  Merge-and-Count(A, B):
    Maintain a Current pointer into each list, initialized to point to the front elements
    Maintain a variable Count for the number of inversions, initialized to 0
    While both lists are nonempty:
      Let a_i and b_j be the elements pointed to by the Current pointer
      Append the smaller of these two to the output list
      If b_j is the smaller element then
        Increment Count by the number of elements remaining in A
      Endif
      Advance the Current pointer in the list from which the smaller element was selected
    EndWhile
    Once one list is empty, append the remainder of the other list to the output
    Return Count and the merged list
Each iteration of the While loop takes constant time, and in each iteration we add some element to the output that will never be seen again. Thus the number of iterations can be at most the sum of the initial lengths of A and B, and so the total running time is O(n).
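
The routine translates almost line for line into Python; whenever an element of B is appended first, every element still remaining in A forms an inversion with it (a sketch, with names of my own choosing):

```python
def merge_and_count(a, b):
    """Merge sorted lists a and b; count pairs (x in a, y in b) with x > y."""
    merged, count = [], 0
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
            count += len(a) - i  # every remaining element of a exceeds b[j]
    merged.extend(a[i:])
    merged.extend(b[j:])
    return count, merged
```
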

We use this Merge-and-Count routine in a recursive procedure that simultaneously sorts and counts the number of inversions in a list L.

  Sort-and-Count(L):
    If the list has one element then
      there are no inversions
    Else
      Divide the list into two halves:
        A contains the first ⌈n/2⌉ elements
        B contains the remaining ⌊n/2⌋ elements
      (r_A, A) = Sort-and-Count(A)
      (r_B, B) = Sort-and-Count(B)
      (r, L) = Merge-and-Count(A, B)
    Endif
    Return r = r_A + r_B + r, and the sorted list L

Since our Merge-and-Count procedure takes O(n) time, the running time T(n) of the full Sort-and-Count procedure satisfies the recurrence (5.1). Therefore, the Sort-and-Count algorithm correctly sorts the input list and counts the number of inversions, and runs in O(n log n) time for a list with n elements.
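
Putting the pieces together in Python, with the merge step counting the "split" inversions between the two halves (a sketch; names are my own):

```python
def sort_and_count(nums):
    """Return (inversion count, sorted list) in O(n log n) time."""
    if len(nums) <= 1:
        return 0, list(nums)
    mid = len(nums) // 2
    r_a, a = sort_and_count(nums[:mid])
    r_b, b = sort_and_count(nums[mid:])
    # Merge step: count inversions with one element in each half.
    merged, r = [], 0
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
            r += len(a) - i  # all remaining elements of a exceed b[j]
    merged.extend(a[i:]); merged.extend(b[j:])
    return r_a + r_b + r, merged
```
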

==== 5.3.2 Comments ====

I feel like this was one of the sections where class discussion was very important. Just reading the algorithm alone did not make much sense to me, and I struggled to understand the key reason why the algorithm returns a sorted list along with the count. It did not seem necessary to me when I was reading this section before--however, going over the algorithm in class made its purpose clear.