This section was explained very well and I'd give it a 10/10 in both readability and my interest. The only thing I'm still a tiny bit confused about is how the algorithm is still O(n^2) with the Ranking initialization. Is it because this initialization technically isn't part of the algorithm, so you don't count it in the algorithm runtime?
| - | ===== 2.3 – A Survey of Common Running Times ===== | + | ===== 2.4 – A Survey of Common Running Times ===== |
| + | |||
| + | Both bounds on running time and on problem size are important. | ||
| + | |||
| + | Some common linear time algorithms are computing the maximum and merging two sorted lists. In the case of computing the maximum, every element has to be examined, so it must be O(n). For merging two sorted lists, you could mush them together, then run a sorting algorithm, but it would be very inefficient. So, you treat them like two sorted card decks, picking up a card from the deck with the lowest value, then placing it in another list, running in at most 2n iterations, so O(n). O(nlogn) time algorithms happen any time an algorithm splits its input into two equal-sized pieces and solves each piece recursively, | ||
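The card-deck merge described above can be sketched in Python (a minimal illustration; the function name is mine, not the book's):

```python
def merge_sorted(a, b):
    """Merge two already-sorted lists in O(len(a) + len(b)) time."""
    merged = []
    i = j = 0
    # Repeatedly take the smaller front element, like drawing
    # the lower card from one of two sorted decks.
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i])
            i += 1
        else:
            merged.append(b[j])
            j += 1
    # One deck is exhausted; append whatever remains of the other.
    merged.extend(a[i:])
    merged.extend(b[j:])
    return merged

print(merge_sorted([1, 3, 5], [2, 4, 6]))  # [1, 2, 3, 4, 5, 6]
```

Each iteration of the loop places one element into the output, so the total work is linear in the combined length of the two inputs.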
| + | |||
| + | Common bounds that come up frequently: 2^n and n!. 2^n occurs as a running time for search algorithms that consider all subsets. N! arises where the search space is all the ways to arrange n items in order (grows more rapidly than 2^n). | ||
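A brute-force search over all subsets can be sketched as follows (a minimal illustration, not from the book); the count comes out to 2^n because each element is independently either in or out of a given subset:

```python
from itertools import combinations

def all_subsets(items):
    """Enumerate every subset of items.

    There are 2^n subsets of an n-element set: building a subset
    means making an in-or-out choice for each of the n elements,
    and 2 * 2 * ... * 2 (n times) = 2^n.
    """
    subsets = []
    for size in range(len(items) + 1):
        subsets.extend(combinations(items, size))
    return subsets

# A search algorithm that examines every subset therefore does at
# least 2^n work just listing the candidates.
print(len(all_subsets([1, 2, 3])))  # 2^3 = 8
```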
| + | |||
| + | Overall, this section was pretty readable and interesting. I'd give it a 7/10. The one nagging question I have is why the total number of subsets of an n-element set is 2^n. This is probably a very simple question and basic answer, but for some reason I'm having a hard time visualizing it. | ||
| + | |||
| + | ===== 2.5 – A More Complex Data Structure: Priority Queues ===== | ||
| + | |||
| + | The goal with a priority queue is to implement a data structure with n elements so that elements can be added and deleted, and the minimum selected, all in O(logn) time. You could use a list, but after removing the minimum, there needs to be a scan of all elements in O(n) time to find the new minimum. Also you couldn' | ||
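As a point of comparison, Python's standard-library heapq module implements exactly this kind of binary heap; a minimal sketch of the two operations this section cares about:

```python
import heapq

# A binary heap keeps the minimum at the root, so insertion and
# extract-min each cost O(log n) -- no O(n) rescan after a removal.
pq = []
for key in [5, 1, 4, 2, 3]:
    heapq.heappush(pq, key)   # O(log n) per insertion

smallest = heapq.heappop(pq)  # O(log n) extract-min
print(smallest)  # 1
```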
| + | |||
| + | Balancing algorithms are shown below: | ||
| + | |||
| + | < | ||
| + | Heapify-down(H, | ||
| + | n = length(H) | ||
| + | if 2i > n then | ||
| + | Terminate with H unchanged | ||
| + | else if 2i < n then | ||
| + | left=2i and right=2i+1 | ||
| + | j be index that minimizes key[H[left]] and key[[H[right]] | ||
| + | else if 2i = n then | ||
| + | j=2i | ||
| + | if key[H[j]] < key[H[i]] then | ||
| + | swap array entries H[i] and H[j] | ||
| + | Heapify-down(H, | ||
| + | </ | ||
| + | |||
| + | < | ||
| + | Heapify-up(H, | ||
| + | if i > 1 then | ||
| + | j=parent(i)=floor(i/ | ||
| + | if key[H[i]] < key[H[j]] then | ||
| + | swap array entries H[i] and H[j] | ||
| + | Heapify-up(H, | ||
| + | </ | ||
| + | |||
| + | This section was very readable and I would give it a 10/10 on readability and interestingness. I have no lingering questions after reading through the section. | ||