**Linear Time:** Two example algorithms given that run in O(n) time are //Computing the Maximum// and //Merging Two Sorted Lists//. For the CtM algorithm, we know it runs in linear time because we must process the entire input (e.g. a list), performing a constant-time action on each element (checking whether the ith value is the new maximum). The MTSL algorithm has a less intuitive linear run time. We keep a pointer into each list, compare the two elements the pointers reference, append the smaller element to a new list, and advance the pointer in the list from which the smaller element was taken. Since each iteration advances one pointer, there can be at most 2n iterations, so the run time is O(n).
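The merging step above can be sketched in Python as follows (a minimal sketch; the function name is my own, not the book's):

```python
def merge_sorted(a, b):
    """Merge two sorted lists into one sorted list in O(n) time."""
    merged = []
    i, j = 0, 0
    # Each iteration advances exactly one pointer, so there are
    # at most len(a) + len(b) (i.e. 2n) iterations in total.
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i])
            i += 1
        else:
            merged.append(b[j])
            j += 1
    # One list is exhausted; append the remainder of the other.
    merged.extend(a[i:])
    merged.extend(b[j:])
    return merged
```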
  
**Linearithmic Time:** No explicit algorithm analysis is given, but we are likely to encounter linearithmic run times when sorting or using divide and conquer. An example algorithm is Mergesort.
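Mergesort illustrates where the O(n log n) bound comes from: O(log n) levels of splitting, each doing O(n) work to merge. A minimal sketch:

```python
def mergesort(lst):
    """Sort a list in O(n log n) time: split in half, recurse, merge."""
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    left = mergesort(lst[:mid])
    right = mergesort(lst[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```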
  
**Quadratic Time:** Two examples of quadratic time are finding a pair of points that are closest together in a plane and nested loops. In the Closest-Pair Problem, we iterate through each input point (x, y), then iterate through each other input point (a, b), then perform a constant action. Thus, the resulting run time is O(n^2).
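The brute-force nested-loop version can be sketched as below (a sketch only; `closest_pair` is my own name, and `math.dist` computes the Euclidean distance between two points):

```python
from itertools import combinations
from math import dist

def closest_pair(points):
    """Brute-force closest pair: check all n*(n-1)/2 pairs, O(n^2) time."""
    best = None
    best_d = float("inf")
    # combinations yields every unordered pair exactly once,
    # which is the nested-loop comparison in compact form.
    for p, q in combinations(points, 2):
        d = dist(p, q)
        if d < best_d:
            best_d, best = d, (p, q)
    return best
```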
Line 38: Line 38:
  
**Sublinear Time:** An example is Binary Search, which utilizes three conditionals (if q == p, q < p, q > p) and recursion to make a run time of O(log n).
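An iterative variant of the same three-way comparison can be sketched as (a sketch, not the book's pseudocode):

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent.
    The search range halves each step, giving O(log n) time."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:      # q == p
            return mid
        elif arr[mid] < target:     # q < p: discard the left half
            lo = mid + 1
        else:                       # q > p: discard the right half
            hi = mid - 1
    return -1
```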

Overall, the readability of this section was 7/10.

===== Section 2.5 A More Complex Data Structure: Priority Queues =====
The motivation for a priority queue is managing real-time events: processes do not arrive in the order of urgency in which they should be handled. We use a //Heap// to implement a priority queue. A heap is a balanced binary tree where each node's key is greater than or equal to its parent's key. We represent a heap using an array, where for a parent at index i, the left child is at index 2i and the right child is at index 2i + 1. Two critical operations on heaps are Heapify-Up and Heapify-Down. When we add an element at the end of the heap and its key is too small (smaller than its parent's), Heapify-Up repairs the heap: we check whether the node satisfies the heap property; if it does not, we swap it with its parent and recursively repeat until the heap is repaired. Heapify-Up has a run time of O(log n). When a node's key is too large (e.g. after deleting the minimum and moving the last element to the root), we use Heapify-Down. After ensuring we are not at the frontier of the tree, we swap the node with the smaller of its two children, repeating until the heap is repaired. Heapify-Down has a run time of O(log n). Using these two operations we can efficiently implement a priority queue. Overall the readability of this section is 7/10.
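The two repair operations and the priority-queue operations built on them can be sketched as below. This is a minimal min-heap sketch with my own function names; note Python lists are 0-indexed, so the children of index i sit at 2i + 1 and 2i + 2 rather than the book's 1-indexed 2i and 2i + 1:

```python
def heapify_up(heap, i):
    """Swap a too-small key up toward the root: O(log n)."""
    while i > 0:
        parent = (i - 1) // 2  # 0-indexed parent
        if heap[i] < heap[parent]:
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent
        else:
            break  # heap property holds

def heapify_down(heap, i):
    """Swap a too-large key down toward the leaves: O(log n)."""
    n = len(heap)
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        smallest = i
        if left < n and heap[left] < heap[smallest]:
            smallest = left
        if right < n and heap[right] < heap[smallest]:
            smallest = right
        if smallest == i:
            break  # both children are larger (or absent): done
        # Swap with the smaller of the two children.
        heap[i], heap[smallest] = heap[smallest], heap[i]
        i = smallest

def insert(heap, key):
    """Priority-queue Insert: append at the end, then Heapify-Up."""
    heap.append(key)
    heapify_up(heap, len(heap) - 1)

def extract_min(heap):
    """Priority-queue ExtractMin: remove the root, move the last
    element into its place, then Heapify-Down."""
    top = heap[0]
    last = heap.pop()
    if heap:
        heap[0] = last
        heapify_down(heap, 0)
    return top
```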