====== Chapter 2: Basics of Algorithm Analysis ======
===== Section 1: Computational Tractability =====
  
===== Section 4: A Survey of Common Running Times =====
When analyzing the running times of algorithms, certain styles of analysis appear frequently across many algorithms and produce the same run-time bounds in every instance. We often sort a particular algorithm's running time into four main groups: sublinear, linear, polynomial, and superpolynomial. Sublinear algorithms run in less than O(n) time and are most commonly either constant or logarithmic; a classic logarithmic example is binary search. Linear algorithms run in O(n) time, meaning the running time is at most "a constant factor times the size of the input": as the input size grows, the running time grows in proportion. Common linear-time algorithms include computing the maximum of a list of n numbers and merging two sorted lists. A running time that sits between linear and polynomial is O(n log n), which is very common for sorting algorithms, specifically Mergesort and Heapsort. Polynomial algorithms run in O(n^k) time and are common when looking for all subsets of size k from a list; they also emerge naturally when an algorithm uses nested loops that each run in O(n). Superpolynomial algorithms run in times larger than O(n^k); an example of an algorithm with such a running time is one that finds all subsets of a list. Another example comes from the Stable Matching Problem, which has n! possible perfect matchings between n men and n women, the same count as the number of ways to arrange a set of n items. This section was very thorough in describing the types of algorithms that fit within each run-time category. I would give it a 10/10.
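As a quick illustration (my own sketch, not from the text), the logarithmic and linear categories above can be shown side by side in Python: binary search halves its search range each step, giving O(log n), while finding the maximum makes a single pass, giving O(n).

```python
# Binary search: a sublinear O(log n) algorithm on a sorted list.
def binary_search(sorted_list, target):
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2           # halve the search range each step
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                          # target not present

# Computing the maximum: a classic linear O(n) algorithm.
def find_max(numbers):
    best = numbers[0]
    for x in numbers[1:]:              # one pass over the input
        if x > best:
            best = x
    return best
```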
===== Section 5: A More Complex Data Structure: Priority Queues =====
"A priority queue is a data structure that maintains a set of elements S, where each element v in S has an associated value key(v) that denotes the priority of element v." Elements with smaller keys have higher priorities. Priority queues support adding an element, deleting an element, and selecting the element with the smallest key (i.e., the highest priority). One main use for priority queues is sorting a list of items based on their "priority" values so that the items with the greatest priority are executed and completed first. The add and delete operations each run in O(log n) time, so in all a priority queue can sort n items in O(n log n). We implement a priority queue with a heap: a balanced binary tree with a single root, in which each node's key is at least as large as its parent's. We store the heap in an array, where a parent located at position i has its left child at position 2i and its right child at position 2i+1. The length of the heap can be calculated with length(h), and access to each node is constant time using the array's random access. Identifying the minimal key in the heap takes O(1), since the smallest key is located at the root. To add an element to the heap, we first place it at the very end of the array and then use the Heapify-up algorithm to move it to its proper position, maintaining the heap properties. To delete an element from the heap, we move the last element of the array into the empty space left by the deleted element, and then use either Heapify-up or Heapify-down to move it to the appropriate position, again maintaining the heap properties. Reading this chapter after learning about heaps in class made it very easy to read and understand. It helped reinforce the concepts discussed in class, and therefore made the understanding process very simple. I'd give the section an 8/10.
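A minimal sketch of an array-based min-heap along these lines (my own code, not the book's; it uses Python's 0-based indexing, so the children of index i sit at 2i+1 and 2i+2 rather than the 1-based 2i and 2i+1):

```python
class MinHeap:
    """Array-based binary min-heap: each node's key is >= its parent's."""

    def __init__(self):
        self.a = []

    def peek_min(self):                # O(1): smallest key sits at the root
        return self.a[0]

    def add(self, key):                # O(log n)
        self.a.append(key)             # place new key at the end of the array
        self._heapify_up(len(self.a) - 1)

    def extract_min(self):             # O(log n)
        root = self.a[0]
        last = self.a.pop()            # move the last element into the hole...
        if self.a:
            self.a[0] = last
            self._heapify_down(0)      # ...then restore the heap property
        return root

    def _heapify_up(self, i):
        while i > 0:
            parent = (i - 1) // 2
            if self.a[i] < self.a[parent]:       # smaller than parent: swap up
                self.a[i], self.a[parent] = self.a[parent], self.a[i]
                i = parent
            else:
                break

    def _heapify_down(self, i):
        n = len(self.a)
        while True:
            left, right, smallest = 2 * i + 1, 2 * i + 2, i
            if left < n and self.a[left] < self.a[smallest]:
                smallest = left
            if right < n and self.a[right] < self.a[smallest]:
                smallest = right
            if smallest == i:                    # both children are larger
                break
            self.a[i], self.a[smallest] = self.a[smallest], self.a[i]
            i = smallest
```

Repeatedly calling extract_min on a heap of n items yields them in sorted order, which is the O(n log n) sort the section describes.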
  
  
  