courses:cs211:winter2018:journals:nasona:chapter2 — revision of 2018/01/29 16:20 by nasona
===== 2.4 Common Runtimes =====
==Summary==
Linear time arises when an algorithm spends a constant amount of work on each input object. Quadratic time typically arises from nested loops: each loop runs O(n) times, and multiplying the two together gives O(n^2). O(n log n) typically comes about in a divide-and-conquer situation: the algorithm splits the input into two pieces, solves each piece recursively, and then combines the two solutions in linear time, as in Mergesort.
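To make the three runtimes above concrete, here is a small Python sketch of my own (the function names are just illustrative, not from the book): a single pass for O(n), nested loops for O(n^2), and Mergesort for the divide-and-conquer O(n log n) pattern.

```python
def max_element(nums):
    """Linear time: constant work per element -> O(n)."""
    best = nums[0]
    for x in nums[1:]:
        if x > best:
            best = x
    return best

def has_duplicate(nums):
    """Quadratic time: two nested O(n) loops -> O(n^2)."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] == nums[j]:
                return True
    return False

def merge_sort(nums):
    """Divide and conquer: split, recurse, merge in linear time -> O(n log n)."""
    if len(nums) <= 1:
        return nums
    mid = len(nums) // 2
    left, right = merge_sort(nums[:mid]), merge_sort(nums[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # linear-time merge of sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```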
| + | |||
==Notes==
Most problems have a natural “search space”. A unifying theme in algorithm design is the search for algorithms whose performance is more efficient than a brute-force enumeration of the search space. When we get a new problem, we have to think about two different bounds: the runtime we hope to achieve and the size of the problem’s natural search space (the runtime of a brute-force search algorithm).
==Questions==
  * When depicting exponential runtime, do you want us to use a specific base like O(2^n), or would a general O(c^n) for some constant c suffice?
  * Are there any instances in which O(n^2) arises without the presence of nested loops?
==Additional Information==
In class, we didn't really talk about O(2^n) or O(n!) in much depth, so it was interesting and helpful to see them in the reading. Until doing the reading, I wasn't quite sure why O(2^n) was such a common and important runtime, but now it is obvious: it is the natural runtime of a brute-force search over all subsets of the input, which we try to improve upon in all of the algorithms we create in this class. Moreover, the way a factorial runtime comes about, from searching over all n! orderings or matchings, was more explicitly explained in the reading than in class. Although we would hope never to have an algorithm that runs this inefficiently, it is useful to recognize when a search space has this size.
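To see why these brute-force runtimes are "natural", here is a small Python sketch of my own (the problems and names are illustrative, not from the book): enumerating all 2^n subsets via bitmasks, and all n! orderings via permutations.

```python
from itertools import permutations

def subset_sum_exists(nums, target):
    """Brute force over the search space of all 2^n subsets of nums."""
    n = len(nums)
    for mask in range(1 << n):          # one bitmask per subset -> 2^n iterations
        if sum(nums[i] for i in range(n) if mask >> i & 1) == target:
            return True
    return False

def best_ordering(items, cost):
    """Brute force over all n! orderings, keeping the cheapest one."""
    return min(permutations(items), key=cost)
```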
This section was a 9 out of 10 in readability because the runtimes were explained very clearly. The only thing that got a little confusing was when they explained some of the examples. However, overall, the language was very precise and the common runtimes were explained in great detail.
===== 2.5 Priority Queues =====
==Summary==
A priority queue consists of elements that each have a priority value, also known as a key; each time we select an element from the set, we take the element with the highest priority. We want to be able to add, delete, and select in O(log n) time, and the heap is the natural data structure for implementing the priority queue. A heap is a balanced binary tree: the tree has a root, and each node can have up to two children, a left and a right child. The elements of a heap are in heap order if the key of any element is at least as large as the key of the element at its parent node in the tree. We implement the heap with an array: the element at index i has its left child at position 2i and its right child at position 2i + 1. To add an element, we use an algorithm called Heapify-up: place the new element at the end of the current elements in the array, compare it to its parent, and swap the two if the child's key is smaller than the parent's, repeating up the tree until heap order is restored. To delete an element, we move the last element of the array into the vacated position and run Heapify-up or Heapify-down as needed to restore heap order.
| + | |||
| + | |||
| + | |||
==Notes==
The main goal in the book is to seek algorithms that improve qualitatively on brute-force search, and we use polynomial-time solvability as the concrete formulation of this. Once we have an efficient algorithm to solve a problem, it is often possible to find further improvements by being mindful of the implementation details and of the choice of data structures.
==Questions==
  * Should the heap method Delete(H, i) be used only in the context of ExtractMin(H) when a heap is implementing a priority queue? Otherwise, wouldn't we need extra bookkeeping to know which position i an arbitrary element currently occupies?
  * I'm pretty sure the answer is yes, but I'm just double-checking: can a priority queue have multiple elements with the same priority? If so, how does the heap-implemented queue decide which element to move to the root when the minimum is extracted and both children have the same priority? I'm pretty sure this would have to be dealt with in the part of Heapify-down that figures out the minimum of the two children of a deleted item.
==Additional Information==
