courses:cs211:winter2018:journals:thetfordt:chapter2 (created 2018/01/14 18:35; current revision 2018/01/29 20:32 by thetfordt)
===== Section 2.1 - Computational Tractability =====
| + | |||
| + | This section begins by outlining some goals in how we will diagnose themes and design principles surrounding algorithms. However, the focus then shifts to the concept of efficiency. As 2.1 progresses, the author takes logical steps towards a working definition of efficiency. We desire it in our algorithms, and we need a careful definition of it such that we can achieve it in our problem-solving as we move forward. | ||
| + | |||
| + | As a Math major, I like that the focus shifted to the mathematics behind efficiency. It truly drew me to the subject matter. The author comes to a concrete definition of efficiency focusing on the idea of a polynomial runtime. This concept was a little confusing in class but is much clearer now after reading about how it works. I found it striking that such a qualitative idea such as algorithm efficiency could be reduced down to a simple quantitative checking process. I would rate the readability of this section at 10/10. | ||
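The "quantitative check" above can be made concrete with a small numeric sketch (my own example, not from the text): for a polynomial bound, doubling the input size only multiplies the work by a constant factor, while for an exponential brute-force bound the blow-up itself grows with n.

```python
# Compare how step counts scale when the input size doubles (toy example).

def poly_steps(n):
    return n ** 2          # a polynomial running time, ~n^2 steps

def expo_steps(n):
    return 2 ** n          # an exponential (brute-force) running time

for n in (10, 20, 40):
    # polynomial ratio is always 4; exponential ratio is 2^n and explodes
    print(n, poly_steps(2 * n) // poly_steps(n), expo_steps(2 * n) // expo_steps(n))
```

The constant ratio for the polynomial case is exactly the property that makes "polynomial time" a robust definition of efficiency.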
===== Section 2.2 - Asymptotic Order of Growth =====
| + | |||
| + | The section continues from Computational Tractability, | ||
| + | |||
| + | The author then walks through the properties of these asymptotic bounds on runtime where clear-cut, easily understandable, | ||
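The upper-bound definition can be checked numerically. Below is a small sketch (my own example, with constants I chose, not from the text) of the definition f(n) = O(g(n)): there exist constants c and n0 such that f(n) <= c * g(n) for all n >= n0.

```python
# Witnessing that f(n) = 3n^2 + 5n is O(n^2).

def f(n):
    return 3 * n**2 + 5 * n   # the function being bounded

def g(n):
    return n**2               # the candidate bound

# With c = 4 and n0 = 5: 3n^2 + 5n <= 4n^2  <=>  5n <= n^2  <=>  n >= 5.
c, n0 = 4, 5
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
print("f(n) = 3n^2 + 5n is O(n^2) with witnesses c = 4, n0 = 5")
```

A finite check like this is not a proof, of course; the algebraic inequality in the comment is what actually establishes the bound for all n >= n0.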
| + | |||
| + | |||
===== Section 2.3 - Implementing the Stable Matching Algorithm Using Lists and Arrays =====
| + | |||
| + | This section begins by walking through the reason behind why we want to examine data structures to implement the Stable Matching Algorithm. We need to walk through these implementations in order to ensure that we are able to use this algorithm in the real world with the same runtime O(n^2) which we arrived at in earlier sections. After this brief introduction, | ||
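The data-structure choices can be sketched in code. This is my own minimal version (0-indexed, names mine) following the book's scheme: an index into each man's list for his next proposal, a list of free men, and a precomputed ranking array so a woman can compare two men in O(1) time.

```python
from collections import deque

def stable_matching(men_prefs, women_prefs):
    """Gale-Shapley in O(n^2) using the array/list structures from 2.3."""
    n = len(men_prefs)
    # ranking[w][m] = position of man m on woman w's list (O(n^2) preprocessing),
    # so "does w prefer m to m'?" becomes a single array comparison.
    ranking = [[0] * n for _ in range(n)]
    for w in range(n):
        for pos, m in enumerate(women_prefs[w]):
            ranking[w][m] = pos
    next_proposal = [0] * n      # index of the next woman each man proposes to
    current = [None] * n         # current[w] = man engaged to w, or None
    free_men = deque(range(n))   # list of currently free men
    while free_men:              # at most n^2 proposals in total
        m = free_men[0]
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if current[w] is None:                        # w was free: engage
            current[w] = m
            free_men.popleft()
        elif ranking[w][m] < ranking[w][current[w]]:  # w prefers m: trade up
            free_men.popleft()
            free_men.append(current[w])
            current[w] = m
        # otherwise w rejects m; he stays free and proposes again next round
    return {current[w]: w for w in range(n)}          # man -> woman
```

Each iteration of the loop is O(1) thanks to the ranking array, which is how the overall O(n^2) bound survives the move from pseudocode to implementation.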
| + | |||
| + | I found this section very straightforward, | ||
| + | |||
| + | |||
===== Section 2.4 - A Survey of Common Running Times =====
| + | |||
| + | The section begins by laying out what is to be described up ahead: a walkthrough of common runtime bounds and some of the approaches which lead to them. I found it helpful that the author walked through the two different ways to approach runtime bounds: the bound which we would hope to achieve, and then the runtime of the brute-force search. It helps to conceptualize these problems and their respective runtimes. | ||
| + | |||
| + | The author begins with the most understandable runtime: linear. This section was very clear, demonstrating linear examples in both finding the maximum of a list and merging two sorted lists. From here, the text moves on to O(n log n) runtime. I found the conceptual basis for when this occurs very helpful: "any algorithm that splits its input into two equal-sized pieces, solves each piece recursively, | ||
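The merging example mentioned above is short enough to write out. This is a minimal sketch of the idea (my own code, not the book's pseudocode): each comparison consumes one element from one of the lists, so the whole merge is O(len(a) + len(b)).

```python
def merge(a, b):
    """Merge two sorted lists in linear time."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:          # take the smaller front element
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])             # at most one of these two
    out.extend(b[j:])             # still has elements left
    return out

print(merge([1, 4, 9], [2, 3, 10]))   # -> [1, 2, 3, 4, 9, 10]
```

This merge step is also exactly the linear-time "combine" that gives mergesort its O(n log n) bound.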
| + | |||
| + | The text then offers a taste of quadratic runtime, citing examples such as finding the largest distance between (x,y) coordinates and the commonality of finding this in the case of nested loops. | ||
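The nested-loop pattern looks like this in practice. A toy sketch of my own, comparing every pair of points to find the largest squared distance: two nested loops over n points give O(n^2) comparisons.

```python
def max_sq_distance(points):
    """Brute-force largest squared distance between any two points, O(n^2)."""
    best = 0
    for i in range(len(points)):             # outer loop: O(n)
        for j in range(i + 1, len(points)):  # inner loop: O(n)
            (x1, y1), (x2, y2) = points[i], points[j]
            best = max(best, (x1 - x2) ** 2 + (y1 - y2) ** 2)
    return best

print(max_sq_distance([(0, 0), (1, 1), (3, 4)]))   # -> 25
```

I use squared distance to avoid floating-point square roots; the pair that maximizes one maximizes the other.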
| + | |||
| + | The next few sections: cubic, O(n^k), and " | ||
| + | |||
| + | |||
===== Section 2.5 - A More Complex Data Structure: Priority Queues =====
| + | |||
| + | Again, the author begins by laying out the goals for the section - which I truly appreciate as the reader. And here, the goal is to lay out a widely used data structure, one that can achieve superior runtime by capitalizing on implementation details. | ||
| + | |||
| + | The author then walks through the problem: that we want to be able to add, delete, and select elements quickly when an algorithm calls for it. The author then walks through exactly what a priority queue is and shows that, by using one, a linear sequence of priority queue operations can be used to sort n numbers. | ||
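The sorting claim above is easy to demonstrate. Here I use Python's heapq module as the priority queue (an implementation choice of mine, not the text's): n Insert operations followed by n ExtractMin operations, each O(log n), sort the list in O(n log n) total.

```python
import heapq

def pq_sort(numbers):
    """Sort by n priority-queue inserts followed by n extract-mins."""
    pq = []
    for x in numbers:               # n Insert operations
        heapq.heappush(pq, x)
    return [heapq.heappop(pq)       # n ExtractMin operations
            for _ in range(len(numbers))]

print(pq_sort([5, 1, 4, 2, 3]))    # -> [1, 2, 3, 4, 5]
```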
| + | |||
| + | We then dive into heaps, as a data structure used to implement priority queues, which turns out to be extraordinarily similar to a binary search tree represented as an array. The author then walks through the heap operations, such as adding, deleting, and the respective algorithms necessary to implement these operations: Heapify-Down and Heapify-Up. | ||
| + | |||
| + | The Heapify-Up algorithm runs in O(log n) time. This algorithm adds an element to the end of a heap, determines if the element needs to be switched with its parent and if so, executes this swap and uses a recursive call. | ||
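The steps above can be sketched on an array-based min-heap. This is my own 0-indexed version (the book's pseudocode is 1-indexed): after appending a new element, swap it with its parent while it is smaller, climbing at most O(log n) levels.

```python
def heapify_up(heap, i):
    """Restore heap order upward from position i by swapping with the parent."""
    if i > 0:
        parent = (i - 1) // 2
        if heap[i] < heap[parent]:                 # heap order violated
            heap[i], heap[parent] = heap[parent], heap[i]
            heapify_up(heap, parent)               # recursive call, as in the text

def heap_insert(heap, x):
    heap.append(x)                  # add the element at the end of the heap...
    heapify_up(heap, len(heap) - 1) # ...then let it bubble up

h = []
for x in [5, 3, 8, 1]:
    heap_insert(h, x)
print(h[0])   # -> 1 (the minimum sits at the root)
```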
| + | |||
| + | The Heapify-Down algorithm also runs in O(log n) time. This algorithm is used to delete an element from a heap while also maintaining the structure of the heap. It deletes the element at a position i, and then promptly places the element at the end of the heap in the deleted position. It then takes this new swapped element, determines which of its children (if it has children) is lower, and swaps with that child. After this, it just executes a recursive call, ultimately resulting in a deleted element and a proper heap structure. | ||
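A sketch of this deletion on the same 0-indexed array min-heap (again my own code, not the book's pseudocode): move the last element into the deleted slot, then swap it with its smaller child until heap order is restored, at most O(log n) levels.

```python
def heapify_down(heap, i):
    """Restore heap order downward from position i."""
    n = len(heap)
    left, right = 2 * i + 1, 2 * i + 2
    smallest = i
    if left < n and heap[left] < heap[smallest]:
        smallest = left                    # find the smaller child...
    if right < n and heap[right] < heap[smallest]:
        smallest = right
    if smallest != i:                      # ...and swap down if needed
        heap[i], heap[smallest] = heap[smallest], heap[i]
        heapify_down(heap, smallest)       # recursive call, as in the text

def heap_delete(heap, i):
    heap[i] = heap[-1]     # move the last element into position i
    heap.pop()
    if i < len(heap):
        heapify_down(heap, i)

h = [1, 3, 2, 7, 5]        # a valid min-heap
heap_delete(h, 0)          # delete the root
print(h[0])   # -> 2
```

One caveat: a fully general Delete also needs a Heapify-Up step for the case where the moved element is smaller than its new parent; the sketch covers only the downward case this paragraph describes.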
| + | |||
| + | We covered both of these in class, and after reading the section, I feel that I have a very strong understanding the logic and runtime of each of them. The author then concludes the section by walking through the five primary operations surrounding heaps implementing priority queues: StartHeap, Insert, FindMin, Delete, and ExtractMin, citing the runtimes for each using the details from earlier in the section. | ||
| + | |||
| + | I would rate the readability of this section at 9/10. It flowed nicely and was a nice refresher from what we covered in class. | ||
| + | |||
