courses:cs211:winter2018:journals:cohene:home:chapter2 — last revised 2018/01/28 22:37 by cohene
This section reminded me of what we learned in the CSCI 112 course at W&L. It focused on runtime and memory allocation in terms of efficiency, and it was fairly straightforward and easy to read. Reading about runtime for algorithms reinforced what I had learned in my previous computer science courses (big-O notation). The methodology behind finding efficiency is to start by analyzing worst-case run times. For example, in the stable matching problem, we should consider the search space (which, in this case, is huge: n! possible perfect matchings).
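To see how quickly that n! search space blows up, here is a quick sketch (my own illustration, not from the book):

```python
import math

def search_space(n):
    """Number of perfect matchings between n men and n women: n!."""
    return math.factorial(n)

# Brute force over all matchings is hopeless even for modest n.
for n in (5, 10, 20):
    print(n, search_space(n))  # 20! is about 2.4e18
```

Even at n = 20, checking every perfect matching is far beyond what any computer can do, which is why the chapter pushes toward efficient algorithms instead of brute force.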
| + | |||
===== 2.2: Asymptotic Order of Growth =====
This section was a lot more difficult to understand than any of the previous sections. On a scale of 1-10, this section was about a 6 for me in regards to how readable it was. This section mainly covered upper and lower bounds for some function f(n), where T(n) is the worst-case run time. For an upper bound, T(n) is O(f(n)); for a lower bound, T(n) is Ω(f(n)). If T(n) is both O(f(n)) and Ω(f(n)), then it is Θ(f(n)), which means that f(n) is an asymptotically tight bound for T(n).
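These definitions can be checked numerically for a concrete function. A small sketch (T and the constants below are my own example, not from the text):

```python
# Empirically checking that T(n) = 3n^2 + 2n is Theta(n^2), i.e. that
# c1*n^2 <= T(n) <= c2*n^2 holds for all n >= 2 with hand-picked constants.

def T(n):
    return 3 * n**2 + 2 * n

def f(n):
    return n**2

c1, c2 = 3, 4  # candidate constants for the lower and upper bound
assert all(c1 * f(n) <= T(n) <= c2 * f(n) for n in range(2, 10_000))
print("T(n) is Theta(n^2) on the tested range")
```

The upper bound works because 3n² + 2n ≤ 4n² whenever 2n ≤ n², i.e. for n ≥ 2; the lower bound 3n² ≤ 3n² + 2n holds for all n.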
| + | |||
As I was unable to attend the last class due to illness, this discussion very much confused me. The following discussion, however, did make some sense. These functions have certain properties; two of these properties are the transitive property and the behavior of sums of functions. This makes sense, as these properties apply to any mathematical function and should therefore apply to any algorithm.
| + | |||
| + | ===== 2.3: Implementing the Stable Matching Algorithm Using Lists and Arrays ===== | ||
This section was very easy to read and understand, and on a scale from 1-10 was most likely a 9 on readability. In my opinion, this section was very important, as it covered how one would go about representing various lists of data using arrays and linked lists. This is important because the two structures are more efficient in different areas. For example, an array is less efficient at maintaining a list that constantly changes size, while a linked list is less efficient at finding the element at the i-th position.
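That trade-off can be sketched in Python (my own illustration, not from the book; `list` stands in for a dynamic array and `collections.deque`, a doubly linked structure, stands in for a linked list):

```python
from collections import deque

# Python's list behaves like a dynamic array: O(1) indexing,
# but O(n) insertion at the front (every element must shift).
arr = [10, 20, 30]
print(arr[1])        # O(1) random access -> 20
arr.insert(0, 5)     # O(n): shifts 10, 20, 30 one slot to the right

# deque behaves like a linked list: O(1) insertion at either end,
# but O(n) traversal to reach a position in the middle.
linked = deque([10, 20, 30])
linked.appendleft(5)  # O(1) insertion at the head
print(linked[1])      # O(n) traversal -> 10
```

Neither structure wins outright, which is why Section 2.3 spends time choosing the right one for each piece of the Gale-Shapley implementation.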
| + | |||
| + | ===== 2.4: A Survey of Common Running Times ===== | ||
On a scale of 1-10 this section was very easy to comprehend, and I would therefore give it an 8. When thinking about running time, computer scientists always look toward efficiency. The fastest running time offers the most efficiency, and it is therefore necessary to discuss common running times and how to achieve each type of run time. In general, there are two kinds of bounds to keep in mind: the run time one hopes to achieve and the size of the problem's natural search space.
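One of the simplest running times in this survey is linear time, and a standard linear-time routine is merging two already-sorted lists; a sketch (my own code, not the book's pseudocode):

```python
def merge(a, b):
    """Merge two sorted lists in O(len(a) + len(b)) time:
    each element is examined and appended exactly once."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])   # at most one of these two
    out.extend(b[j:])   # tails is non-empty
    return out

print(merge([1, 4, 7], [2, 3, 9]))  # -> [1, 2, 3, 4, 7, 9]
```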
| + | |||
Another common run time is O(n log n). This run time is generally achieved by any algorithm that splits its input into two equal-sized pieces, solves each piece recursively, and then combines the two solutions in linear time.
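Mergesort is the classic instance of that split-recurse-combine pattern; a compact sketch (my own code):

```python
def merge_sort(xs):
    """Split in half, sort each half recursively, merge in linear
    time: O(n log n) overall."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # linear-time combine step
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # -> [1, 2, 3, 5, 8, 9]
```

The recursion has O(log n) levels and does O(n) merging work per level, which is where the O(n log n) bound comes from.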
| + | |||
When nested loops occur, the run time becomes a polynomial. Two common polynomial run times are quadratic time and cubic time. In general, O(n^k) time occurs for any constant k when one searches over all subsets of size k. This became much clearer after reading it in the textbook, as several examples, such as searching through the nodes of a graph, were given. The most common run times beyond polynomial time are 2^n and n!. It turns out that 2^n, unfortunately, is the size of the search space for many problems; an example of an algorithm with this run time is finding an independent set of maximum size within a graph. n!, however, grows even faster than 2^n. n! is the number of ways to match n items with n other items. This was made clear by the example of the Stable Matching Problem, where n! is the number of ways to match the men and women. This run time also occurs in problems where the search space consists of all ways to arrange n items in order, such as the Traveling Salesman Problem.
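The nested-loop pattern behind quadratic time can be sketched with a brute-force closest pair over a list of numbers (my own simplified example; the book's version uses points in the plane):

```python
from itertools import combinations

def closest_pair(points):
    """Examine all n-choose-2 pairs: O(n^2) comparisons."""
    best = None
    for p, q in combinations(points, 2):  # the "nested loop" over pairs
        d = abs(p - q)
        if best is None or d < best[0]:
            best = (d, p, q)
    return best

print(closest_pair([4, 11, 7, 30]))  # -> (3, 4, 7): 4 and 7 are closest
```

With four inputs this checks 6 pairs; with n inputs it checks n(n-1)/2, which is what makes it quadratic.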
| + | |||
The book had a good explanation of sublinear run times. These run times arise when the input can be "queried" indirectly rather than read completely; the classic example is binary search, which runs in O(log n) time.
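A sketch of binary search (my own code): it probes only O(log n) positions of the sorted input instead of scanning all n of them, which is why it is sublinear.

```python
def binary_search(sorted_xs, target):
    """Return the index of target in a sorted list, or -1 if absent.
    Each probe halves the remaining range: O(log n) probes total."""
    lo, hi = 0, len(sorted_xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_xs[mid] == target:
            return mid
        if sorted_xs[mid] < target:
            lo = mid + 1    # target can only be in the upper half
        else:
            hi = mid - 1    # target can only be in the lower half
    return -1

print(binary_search([2, 5, 8, 12, 16, 23], 12))  # -> 3
```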
| + | |||
| + | |||
| + | ===== 2.5: A More Complex Data Structure: Priority Queues | ||
I believe that I had a thorough understanding of priority queues from the class lectures, and so this reading simply reinforced what I had learned in this class and previous classes. Achieving a desired run time comes from more than just the implementation of an algorithm; it also has to do with overcoming higher-level obstacles, and a priority queue is designed to overcome these obstacles through its structure. In a priority queue, elements have a priority value (key), and each time an element is selected, the one with the highest priority is chosen. In a priority queue, the smaller the key, the higher the priority. This type of data structure can be used in the way a computer schedules various processes. Priority queues support addition, deletion, and selection, all in O(log n) time, and can therefore sort a set of n elements in O(n log n) time.
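A minimal sketch using Python's built-in `heapq` module (my own illustration; the task names are made up). As in the text, the smaller key wins:

```python
import heapq

pq = []
heapq.heappush(pq, (2, "write email"))       # each push is O(log n)
heapq.heappush(pq, (0, "handle interrupt"))
heapq.heappush(pq, (1, "render frame"))

order = []
while pq:
    order.append(heapq.heappop(pq))          # each pop is O(log n)
print(order)  # keys come out in increasing order: 0, 1, 2
```

Pushing all n elements and popping them back out is exactly the O(n log n) priority-queue sort the section mentions.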
| + | |||
Priority queues can be implemented through the use of heaps. Heaps allow us to maintain the elements in sorted order of the keys while combining the benefits of a sorted array and a list. A heap has the appearance of a balanced binary tree: the tree has a root, which holds the highest-priority key, and each node has up to two children (a left and a right child). The heap is in //heap order// if the key of any element is at least as large as the key of its parent. A heap can also be represented as an array H, with nodes corresponding to positions: H[1] is the root, the left child of node i is H[2i], the right child is H[2i + 1], the parent is H[i/2] (rounding down), and length(H) is the number of elements in the heap.
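The 1-based index arithmetic above can be sketched directly (my own code; slot 0 of the Python list is left unused so the formulas match the text):

```python
def left(i):   return 2 * i       # left child of node i
def right(i):  return 2 * i + 1   # right child of node i
def parent(i): return i // 2      # parent of node i (rounds down)

# A heap of 5 elements in heap order: every key >= its parent's key.
H = [None, 3, 7, 5, 11, 9]

assert H[left(1)] == 7 and H[right(1)] == 5   # children of the root
assert H[parent(4)] == 7                      # H[4] = 11 sits under H[2] = 7
print("index arithmetic matches the tree structure")
```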
| + | |||
There are several heap operations to support adding and deleting elements. Two algorithms, Heapify-Up and Heapify-Down, support insertion into and deletion from the heap, respectively. These algorithms were designed to process these operations in O(log n) time. Heapify-Up fixes the heap in O(log n) time and allows for insertion of a new element into a heap of existing elements. Heapify-Down fixes the heap in O(log n) time and allows for deletion of an element from a heap of n elements.
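A minimal Python sketch of the two routines (1-based indexing with slot 0 unused, as above; this is my own rendering, not the book's pseudocode):

```python
def heapify_up(H, i):
    """Bubble H[i] up while it is smaller than its parent: O(log n)."""
    while i > 1 and H[i] < H[i // 2]:
        H[i], H[i // 2] = H[i // 2], H[i]
        i //= 2

def heapify_down(H, i):
    """Push H[i] down while it is larger than a child: O(log n)."""
    n = len(H) - 1                  # slot 0 is unused
    while 2 * i <= n:
        child = 2 * i
        if child + 1 <= n and H[child + 1] < H[child]:
            child += 1              # pick the smaller of the two children
        if H[i] <= H[child]:
            break
        H[i], H[child] = H[child], H[i]
        i = child

H = [None, 3, 7, 5, 11, 9]
H.append(2)          # insert: place the new key in the last position...
heapify_up(H, 6)     # ...then fix the heap upward
print(H[1])          # -> 2, the new minimum
```

Deletion works symmetrically: move the last element into the vacated slot, shrink the heap, then run `heapify_down` (or `heapify_up`, depending on which way the moved key violates heap order).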
| + | |||
There are various operations that can be run on heaps. Insertion, deletion, extracting the minimum, and changing a key all take O(log n) time. Initializing a heap takes linear time; finding the minimum, however, can be achieved in constant time.
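Those last two bounds are easy to see with `heapq` (my own illustration; the book proves them for its own heap, not Python's):

```python
import heapq

data = [9, 4, 7, 1, 8, 3]
heapq.heapify(data)   # builds heap order in place in O(n), not O(n log n)
print(data[0])        # -> 1: the minimum always sits at the root, O(1)
```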
Overall, this section was easy to understand and would score around a 7 on a scale of 1-10 in readability.
