===== 2.4 Common Runtimes =====
==Summary==
Linear time arises when an algorithm spends a constant amount of time on each input object. Quadratic time arises when nested loops are involved: both loops are O(n), so the runtimes multiply together to give O(n^2). O(nlogn) typically arises in a divide-and-conquer situation: the algorithm splits the input into two pieces, solves each piece recursively, then merges the results into a single output. O(logn) arises when an algorithm halves the active portion of the input on each iteration. O(n^k) occurs, for any constant k, when we search over all subsets of size k. O(2^n) is the runtime of a natural brute-force search algorithm, and O(n!) arises when the algorithm operates on every possible permutation of the input.

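As an illustration of two of these runtimes (my own sketches, not examples from the text), a pair-counting routine shows how nested O(n) loops multiply to O(n^2), and binary search shows how halving the active portion of the input gives O(logn):

```python
def count_equal_pairs(items):
    """O(n^2): two nested loops, each O(n), multiply together."""
    count = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                count += 1
    return count

def binary_search(sorted_items, target):
    """O(log n): halve the active portion of the input each iteration."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found
```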
==Notes==
Most problems have a natural “search space”. A unifying theme in algorithm design is the search for algorithms whose performance is more efficient than a brute-force enumeration of the search space. When we get a new problem, we have to think about two different bounds: the runtime we hope to achieve and the size of the problem’s natural search space (the runtime of a brute-force search algorithm).
  
==Questions==
  * When depicting exponential runtime, do you want us to use a specific base like O(2^n), or would a general O(k^n) suffice?
  * Are there any instances in which O(n^2) arises without the presence of nested loops?
==Additional Information==
In class, we didn't talk about O(2^n) or O(n!) in much depth, so it was really interesting and helpful to see them in the reading. Until doing the reading, I wasn't quite sure why O(2^n) was such a common or important runtime, but now it is obvious why: it is the natural runtime of a brute-force search algorithm, which we try to improve upon in all of the algorithms we create in this class. Moreover, the way a factorial runtime comes about was explained more explicitly in the reading than in class. Although we would hope never to have an algorithm that runs this slowly, it is still important to know how it arises. Factorial runtime comes about as the number of ways to match up n items with n other items, and the search space consists of all ways to arrange n items.

This section was a 9 out of 10 in readability because the runtimes were explained very clearly. The only thing that got a little confusing was the explanation of some of the examples. Overall, though, the language was very precise and the common runtimes were explained in great detail.
===== 2.5 Priority Queues =====
==Summary==
A priority queue consists of elements that each have a priority value, also known as a key; each time we select an element from the set, we take the element with the highest priority. We want to be able to add, delete, and select in O(logn) time. The heap is the best data structure for implementing a priority queue. A heap is a balanced binary tree: the tree has a root, and each node can have up to two children, a left child and a right child. Elements in a heap are in heap order if the key of any element is at least as large as the key of the element at its parent node in the tree. We can implement the heap with an array: the element at index i has its left child at position 2i and its right child at position 2i+1. When adding an element, we employ an algorithm called Heapify-up: add the new element after the current elements in the array, compare it to its parent, and swap them if the child's priority is higher. This is applied recursively until the element reaches its correct spot, and it runs in O(logn) time. When deleting, we employ an algorithm called Heapify-down: remove the item from the given position, move the last element into the hole, and shift it down by repeatedly swapping it with its highest-priority child. This is also recursive and executes in O(logn). The heap combines the benefits of a sorted array and a list.

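A minimal array-based min-heap along these lines might look as follows (my own sketch; the book's version is 1-indexed, while this adaptation uses Python's 0-based lists, so the children of index i sit at 2i+1 and 2i+2):

```python
class Heap:
    """Min-heap backed by a Python list (0-indexed)."""

    def __init__(self):
        self.items = []

    def insert(self, key):
        """Append at the end, then Heapify-up: O(log n)."""
        self.items.append(key)
        i = len(self.items) - 1
        while i > 0:
            parent = (i - 1) // 2
            if self.items[i] < self.items[parent]:
                # child has higher priority (smaller key): swap upward
                self.items[i], self.items[parent] = self.items[parent], self.items[i]
                i = parent
            else:
                break

    def extract_min(self):
        """Remove the root, move the last element into the hole, Heapify-down: O(log n)."""
        items = self.items
        root = items[0]
        last = items.pop()
        if items:
            items[0] = last
            i, n = 0, len(items)
            while True:
                left, right = 2 * i + 1, 2 * i + 2
                smallest = i
                if left < n and items[left] < items[smallest]:
                    smallest = left
                if right < n and items[right] < items[smallest]:
                    smallest = right
                if smallest == i:
                    break
                items[i], items[smallest] = items[smallest], items[i]
                i = smallest
        return root
```

Repeatedly extracting the minimum from such a heap yields the elements in sorted order, which is essentially heapsort.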
==Notes==
The main goal in the book is to seek algorithms that improve qualitatively on brute-force search, using polynomial-time solvability as the concrete formulation of this. Once we have an efficient algorithm to solve a problem, it can be possible to find further improvements by being mindful of the implementation, and by possibly using more complex data structures. The priority queue is one of the most broadly useful sophisticated data structures.
  
==Questions==
  * Should the heap method Delete(H, i) be used only in the context of ExtractMin(H) when a heap implements a priority queue? Wouldn't extracting any element besides the one with the highest priority violate the essence of a priority queue?
  * Can a priority queue have multiple elements with the same priority? I'm pretty sure the answer is yes, but I'm just double-checking. If so, how does the heap-implemented queue decide which element to move to the root when the minimum is extracted and both children have the same priority? I'm pretty sure this would have to be dealt with in the part of Heapify-down that finds the minimum of the two children of a deleted item.

==Additional Information==
After going to class and having Professor Sprinkle really break down the components of Heapify-down, what exactly happens in that algorithm became much clearer. She emphasized that the three if/elif statements correspond to the cases of how many children a node has: if 2i > n, the element has no children; else if 2i < n, the element has two children; finally, if 2i = n, the element has one child. The reason for having Heapify-down also became clearer after class: it is there to fill the hole that deleting an element leaves, while maintaining the heap order.

This section's readability was an 8 out of 10. The progression from the definitions of priority queues and heaps to the final explanation was very precise and natural. The only reason it wasn't a 10 is that it took me a while to fully grasp the concepts of Heapify-up and Heapify-down. Ultimately, the section used very clear language and precise explanations of priority queues and heaps, and it included a lot of definitions and explanations of the properties of different data structures, which were super helpful in going back over my notes.
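The three cases above can be sketched directly in code (my own rough adaptation of the book's 1-based pseudocode; slot 0 of the array is left unused so that the children of index i sit at 2i and 2i+1, and n is the number of elements):

```python
def heapify_down(H, i):
    """Push H[i] down until heap order is restored.
    H is 1-indexed (H[0] is an unused placeholder)."""
    n = len(H) - 1
    if 2 * i > n:
        return                      # no children: i is a leaf, nothing to do
    elif 2 * i < n:
        left, right = 2 * i, 2 * i + 1   # two children: pick the smaller key
        j = left if H[left] <= H[right] else right
    else:
        j = 2 * i                   # 2i == n: exactly one child
    if H[j] < H[i]:
        # child has higher priority (smaller key): swap and recurse
        H[i], H[j] = H[j], H[i]
        heapify_down(H, j)
```

For example, after a deletion moves an out-of-place key to the root, `heapify_down(H, 1)` sinks it to a position where heap order holds again.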