courses:cs211:winter2018:journals:mccaffreyk:home:5 [2018/03/11 23:58] – mccaffreyk
Now, we look at cases where the recursion tree is not binary. First, we still assume that the input size is halved at each level; however, now each parent node has 3 branches instead of 2. This produces a larger increase in work as levels are added: since 3 > 2, the running time is worse than linear (and worse than O(nlogn)), though it is still polynomial, coming in below quadratic at O(n^(log2 3)), roughly O(n^1.59).

In general, this case is bounded by O(n^(log2 q)), meaning any recurrence of this form with more than 2 branches is bounded by this time. We must note that **q means the number of children per parent in the recurrence relation.** Of course, we can apply partial substitution to verify this just as we could for the lower values of q. Another case is where q=1. That is, at each level of the recurrence there is only one subproblem, and the work done at level l is cn/2^l. Summing this geometric series over all levels gives at most 2cn = O(n). The interesting thing about this is that the amount of work taking place actually decreases as the level number increases. In contrast, when q>2 the per-level work increases, and when q=2 it stays the same (for q=2, cn work is being done at every level). Some recurrence relations also have running times that come from geometric sums in a different way. For instance, in T(n) <= 2T(n/2) + cn^2 the work per level shrinks very quickly: level l does 2^l * c(n/2^l)^2 = cn^2/2^l work, so the total is a convergent geometric sum bounded by 2cn^2 = O(n^2), with the root level dominating. Some parts of this chapter were more clear but others (such as the geometric-sum analysis) were still very confusing. I would rate it at 6/10.
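The three regimes above (q = 1, q = 2, q > 2) can be checked numerically by unrolling T(n) = qT(n/2) + cn directly. This is a sketch of my own (not from the text), taking c = 1 and n a power of 2:

```python
import math

def total_work(n, q, c=1):
    """Unroll the recurrence T(n) = q*T(n/2) + c*n with T(1) = c."""
    if n <= 1:
        return c
    return q * total_work(n // 2, q, c) + c * n

n = 2 ** 16

# q = 1: work halves at each level, so the total is a geometric series
# summing to less than 2cn, i.e. O(n). (Exactly 2n - 1 here.)
print(total_work(n, q=1))   # 131071

# q = 2: every one of the log2(n) + 1 levels does cn work: Theta(n log n).
print(total_work(n, q=2))   # 1114112 == n * (log2(n) + 1)

# q = 3: per-level work grows by a factor of 3/2 per level; the bottom
# level dominates and the total is Theta(n^(log2 3)) ~ n^1.59.
print(total_work(n, q=3) / n ** math.log2(3))  # ratio stays bounded near 3
```

The three printed values match the three bounds from the unrolling argument: linear for q=1, n log n for q=2, and n^(log2 q) for q=3.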
==== Section 5.3: Counting Inversions ====
Here, we apply the building blocks from the other sections to solve several more recurrence-related problems. For instance, we want a way to count the number of inversions between two orderings of the same values: pairs of elements that appear in one order in the first list and in the opposite order in the second. Complete agreement between the two orderings means zero inversions, and complete disagreement (one is the reverse of the other) means the maximum number, n(n-1)/2. For the algorithm, suppose we want to count the inversions in a list B relative to a reference ordering A over the same elements. We split B into two halves recursively, as in typical merge sort, and sort and count each half. Then, while merging the two sorted halves a and b back together, we compare the front elements of each: whenever an element of b is placed before the remaining elements of a, each of those remaining elements forms one inversion with it, so we add their count to a running total. This lets us count the inversions contributed by a single merge in only O(n) time. All along, we keep track of the total number of inversions. This algorithm runs in O(nlogn) time, as does merge sort, whereas a brute-force search over all pairs would require O(n^2) time.
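The merge-and-count step described above can be sketched in Python. This is my own sketch of the standard divide-and-conquer approach (names are mine), counting inversions relative to ascending order:

```python
def sort_and_count(a):
    """Return (sorted copy of a, number of inversions in a)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, inv_left = sort_and_count(a[:mid])
    right, inv_right = sort_and_count(a[mid:])

    # Merge the sorted halves, counting "split" inversions as we go.
    merged, inv_split = [], 0
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            # right[j] jumps ahead of every element remaining in left,
            # so each of those remaining elements is one inversion.
            merged.append(right[j])
            j += 1
            inv_split += len(left) - i
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inv_left + inv_right + inv_split

print(sort_and_count([2, 4, 1, 3, 5]))  # ([1, 2, 3, 4, 5], 3)
```

Each merge is O(n) and there are O(log n) levels of recursion, matching the O(nlogn) bound; a fully reversed list of length n yields the maximum n(n-1)/2 inversions.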