====== Chapter 2: Basics of Algorithm Analysis ======
===== Section 1: Computational Tractability =====
  
When analyzing the asymptotic bounds of an algorithm, you don't actually need to write and run the program; instead, you can analyze the pseudocode implementation of the algorithm in a step-by-step fashion. For the Gale-Shapley algorithm, we know that the algorithm will terminate in at most n^2 iterations, as there are n men who can each make proposals to at most n women, thus resulting in a total of n^2 proposals. When thinking about how this algorithm will actually be implemented with data structures, the main things that need to be tracked are the set of men and women, their respective individual preference lists, the set of unmatched men, whom each man has proposed to, and the current engagements at any point in time. When implementing these structures we need to decide whether it is more efficient to use arrays or lists.

Arrays are a set block of memory that stores a specific preset amount of information. The main properties of arrays are that they have random access and can access the ith element in constant time, they can look for an element within the set in linear time, and, if the array is sorted, they can look for an element in logarithmic time. However, arrays are created with a fixed size, and therefore are not dynamically sizable and are not as useful for sets of changing size. A linked list is a collection of blocks of memory connected through pointers that indicate where the next block of memory is located. The main properties of linked lists are that they are dynamically sizable, and therefore don't have a fixed size like arrays, and that they can delete items from the list in constant time, as well as add items in constant time. However, they do not have random access and therefore require linear time to access the ith item. Converting between an array and a list can be done in linear time. Returning to the Gale-Shapley algorithm, the strengths of both arrays and lists can be used. When tracking the individual men and women, they can be represented by integers that are used as their 'ids'. The preference lists of men and women can be tracked using 2D arrays, one for men and one for women. The set of unmatched men is best implemented using a list, as it will change over time. The set of engagements can be tracked using an array, matching the 'ids' of an engaged man and woman. The next woman a man needs to propose to can be tracked by maintaining the integer position of the next woman on his preference list. Using these data structures, we are able to make sure that every iteration of the while loop runs in constant time; with at most n^2 iterations, this confirms that the algorithm can be implemented in O(n^2) time.

I found this section very helpful in understanding when to use lists versus arrays in order to make each step of the algorithm run in constant time. Overall I would give this section a 9/10.
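The implementation described above can be sketched in Python. This is a minimal, hypothetical version of the propose-and-reject loop using the data structures from the text: 2D preference arrays, an integer pointer per man for his next proposal, an engagements array indexed by woman, and a list of unmatched men. One structure not mentioned above is added here as an assumption: an inverted `rank` table for each woman, so she can compare two suitors in constant time rather than scanning her preference list.

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Stable matching sketch; men_prefs[m] and women_prefs[w] are the
    2D preference arrays of ids described in the text."""
    n = len(men_prefs)
    # rank[w][m] = position of man m on woman w's list; this inverted
    # table (an addition, not in the text) gives O(1) suitor comparison
    rank = [[0] * n for _ in range(n)]
    for w in range(n):
        for pos, m in enumerate(women_prefs[w]):
            rank[w][m] = pos
    next_proposal = [0] * n      # index of the next woman on each man's list
    engaged_to = [None] * n      # engaged_to[w] = her current fiance's id
    free_men = deque(range(n))   # the list of unmatched men
    while free_men:              # at most n^2 iterations, each O(1)
        m = free_men[0]
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if engaged_to[w] is None:                       # w is free: engage
            engaged_to[w] = m
            free_men.popleft()
        elif rank[w][m] < rank[w][engaged_to[w]]:       # w prefers m: trade up
            free_men.popleft()
            free_men.append(engaged_to[w])
            engaged_to[w] = m
        # otherwise w rejects m, who stays unmatched
    return {m: w for w, m in enumerate(engaged_to)}     # man id -> woman id
```

The while loop does a constant amount of work per proposal, and there are at most n^2 proposals, matching the O(n^2) bound above.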
===== Section 4: A Survey of Common Running Times =====
When analyzing the running times of algorithms, there are styles of analysis that appear frequently across many algorithms and produce the same run-time bounds in every instance. In analyzing a particular algorithm's running time, we often categorize it into four main groups: sublinear, linear, polynomial, and superpolynomial. Sublinear algorithms run in less than O(n) time, and are most commonly either constant or logarithmic.

An example of a logarithmic algorithm is binary search. Linear algorithms run in O(n) time, and have a running time that is at most "a constant factor times the size of the input". This means that as the input size scales, the running time scales proportionally, in a 1-to-1 ratio. Some common algorithms that run in linear time are computing the maximum of a list of n numbers and merging two sorted lists. A common running time that sits in between linear and polynomial is O(nlogn). This run-time is very common for sorting algorithms, specifically Mergesort and Heapsort. Polynomial algorithms run in O(n^k) time, and are common when looking for all subsets of size k from a list. They also emerge naturally when an algorithm utilizes nested loops that each run in O(n). Superpolynomial algorithms run in time larger than O(n^k); an example of an algorithm with such a run-time is one that finds all 2^n subsets of a list. Another example is from the stable matching problem, which has n! possible perfect matchings between n men and n women; the same n! growth appears when finding all the possible ways to arrange a set of n items. This section was very thorough in describing the types of algorithms that fit within each run-time category. I would give it a 10/10.
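Three of the running times surveyed above can be illustrated with short sketches (function names are my own): finding the maximum of a list in linear time, merging two sorted lists in linear time, and binary search in logarithmic time.

```python
def find_max(nums):
    """O(n): one pass over the list, tracking the best value seen so far."""
    best = nums[0]
    for x in nums[1:]:
        if x > best:
            best = x
    return best

def merge_sorted(a, b):
    """O(n): merge two sorted lists by repeatedly taking the smaller front."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    return out + a[i:] + b[j:]   # append whichever list has leftovers

def binary_search(sorted_nums, target):
    """O(log n): halve the search range each step; returns index or -1."""
    lo, hi = 0, len(sorted_nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_nums[mid] == target:
            return mid
        if sorted_nums[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

`merge_sorted` is the inner step of Mergesort: splitting the input in half, sorting each side, and merging gives the O(nlogn) bound mentioned above.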
===== Section 5: A More Complex Data Structure: Priority Queues =====
"A priority queue is a data structure that maintains a set of elements S, where each element v in S has an associated value key(v) that denotes the priority of element v." Elements with smaller keys have higher priorities. Priority queues support addition, deletion, and selecting the element with the smallest key (i.e. highest priority). One main usage for priority queues is sorting a list of items based on their value or "priority" so that items with the greatest priority are executed and completed first. The priority queue operations add and delete run in O(logn) time, so in all a priority queue can sort n items in O(nlogn) time. When implementing a priority queue, we use a heap. A heap is a balanced binary tree with a root at the top, in which each node's key is at least as large as its parent's key. When implementing the heap, we use an array structure, where a parent located at position i has its left child at position 2i and its right child at position 2i+1. The length of the heap can be calculated using length(h), and access to each node is constant using the array's random access. Identifying the minimal key in the heap takes O(1), since the smallest key is located at the root. When adding an element to the heap, we first add it to the very end of the array, and then use the algorithm Heapify-up to move the element to its proper position and maintain the heap properties. When we delete an item from the heap, we move the last element of the array into the empty position where the deleted element was, and then use either Heapify-up or Heapify-down to move that element to the appropriate position and maintain the heap properties. Reading this section after learning about heaps in class made it very easy to read and understand. It helped reinforce the concepts discussed in class, and therefore made the understanding process very simple. I'd give the section an 8/10.
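The heap operations above can be sketched as a small Python class. This is a minimal illustration, not a production implementation: to match the text's positions (left child at 2i, right child at 2i+1), the array is 1-indexed with a dummy element at index 0. Only the Heapify-down case of deletion is shown, since removing the minimum always moves the replacement element downward.

```python
class MinHeap:
    """Array-backed heap as described above: node i has children at 2i
    and 2i + 1 (index 0 holds a dummy so positions start at 1)."""
    def __init__(self):
        self.h = [None]                # dummy at index 0

    def length(self):                  # length(h) from the text
        return len(self.h) - 1

    def find_min(self):                # O(1): smallest key sits at the root
        return self.h[1]

    def add(self, key):                # O(log n): append, then Heapify-up
        self.h.append(key)
        self._heapify_up(self.length())

    def delete_min(self):              # O(log n): last element fills the
        root = self.h[1]               # root's empty spot, then Heapify-down
        last = self.h.pop()
        if self.length() >= 1:
            self.h[1] = last
            self._heapify_down(1)
        return root

    def _heapify_up(self, i):
        # Swap upward while the node is smaller than its parent at i // 2.
        while i > 1 and self.h[i] < self.h[i // 2]:
            self.h[i], self.h[i // 2] = self.h[i // 2], self.h[i]
            i //= 2

    def _heapify_down(self, i):
        # Swap downward with the smaller child until the heap property holds.
        n = self.length()
        while True:
            smallest, l, r = i, 2 * i, 2 * i + 1
            if l <= n and self.h[l] < self.h[smallest]:
                smallest = l
            if r <= n and self.h[r] < self.h[smallest]:
                smallest = r
            if smallest == i:
                return
            self.h[i], self.h[smallest] = self.h[smallest], self.h[i]
            i = smallest
```

Sorting with this structure works as the text suggests: n adds followed by n delete_min calls, each O(logn), return the items in sorted order for O(nlogn) total.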
  
  
  