Chapter 2 - Basics of Algorithm Analysis
Section 2.1 - Computational Tractability
This section begins by outlining some goals for how we will identify themes and design principles surrounding algorithms. The focus then shifts to the concept of efficiency. As 2.1 progresses, the author takes logical steps toward a working definition of efficiency: we desire it in our algorithms, and we need a careful definition of it so that we can achieve it in our problem-solving as we move forward.
As a Math major, I like that the focus shifted to the mathematics behind efficiency. It truly drew me to the subject matter. The author arrives at a concrete definition of efficiency centered on the idea of a polynomial running time. This concept was a little confusing in class but is much clearer now after reading about how it works. I found it striking that a qualitative idea such as algorithm efficiency could be reduced to a simple quantitative checking process (noted below for my own reference). I would rate the readability of this section at 10/10.
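That checking process, as I understand it from the section, is the polynomial-time condition: an algorithm counts as efficient when absolute constants bound its worst-case running time by a polynomial in the input size,

  \[ \exists\, c, d > 0 \ \text{such that on every input of size } n, \ \text{the running time is at most } c \cdot n^{d} \ \text{primitive steps.} \]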
Section 2.2 - Asymptotic Order of Growth
The section continues from Computational Tractability, where we identified a definition of efficiency based on the worst-case runtime performance of an algorithm. Here, the author looks for a proper way to bound a function f(n) which expresses the runtime of an algorithm, examining the different ways to express bounds: big-O, Omega, and Theta notation. The exact definitions behind these made more sense when I read back through them after we covered them in class. In class, I was a little hazy on all of the variables, but they make more sense now. I found all of the explanations fairly intuitive, with one exception: I struggled to follow the converging-limit characterization of Theta on page 38. I have a solid grasp of what it means, but I am not fully confident with the material.
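To sort out the variables I was hazy on, here is how I would restate the definitions, with T(n) the worst-case running time and f(n) the candidate bound:

  \[ T(n) \text{ is } O(f(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \ \text{such that}\ T(n) \le c \cdot f(n) \ \text{for all}\ n \ge n_0 \]
  \[ T(n) \text{ is } \Omega(f(n)) \iff \exists\, \epsilon > 0,\ n_0 \ge 0 \ \text{such that}\ T(n) \ge \epsilon \cdot f(n) \ \text{for all}\ n \ge n_0 \]
  \[ T(n) \text{ is } \Theta(f(n)) \iff \text{both of the above hold} \]

The page-38 result I struggled with then says that if \( \lim_{n \to \infty} T(n)/f(n) \) exists and equals some constant c > 0, then T(n) is Theta(f(n)).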
The author then walks through the properties of these asymptotic bounds on runtime with clear-cut, easily understandable proofs. After these, the author covers the bounds of some common functions, including polynomials, logarithms, and exponentials. These were fairly straightforward. I would rate the readability of this section at 7/10; some of the material was hard to follow, and I found it less interesting than 2.1.
Section 2.3 - Implementing the Stable Matching Algorithm Using Lists and Arrays
This section begins by explaining why we want to examine data structures to implement the Stable Matching Algorithm. We need to walk through these implementations in order to ensure that we are able to use this algorithm in the real world with the same O(n^2) runtime which we arrived at in earlier sections. After this brief introduction, the author walks through the relative benefits and costs of using lists and arrays, a nice refresher on their properties. The chapter then jumps into the Stable Matching Algorithm itself. There are four steps within the algorithm which the author determines must run in constant time in order for the algorithm to run in O(n^2) time. The author walks through the steps logically and clearly, and arrives at the desired conclusion: the G-S algorithm runs in O(n^2) time, as sketched below.
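As a sanity check of my own understanding, here is a minimal Python sketch of that implementation. The names (man_pref, woman_rank, next_proposal, current, free_men) are mine, not the book's, and I assume preference lists are given as lists of indices 0..n-1:

  # Gale-Shapley using the array/list ideas from the section.
  def stable_matching(man_pref, woman_pref):
      n = len(man_pref)
      # Precompute each woman's ranking of each man in O(n^2) time, so
      # "does w prefer m to m_prime?" becomes a constant-time lookup.
      woman_rank = [[0] * n for _ in range(n)]
      for w in range(n):
          for rank, m in enumerate(woman_pref[w]):
              woman_rank[w][m] = rank
      next_proposal = [0] * n    # position of m's next proposal in man_pref[m]
      current = [None] * n       # current[w]: man w is engaged to, or None
      free_men = list(range(n))  # list used as a stack: O(1) to pick a free man
      while free_men:
          m = free_men.pop()
          w = man_pref[m][next_proposal[m]]
          next_proposal[m] += 1
          if current[w] is None:
              current[w] = m                # w was free: engage
          elif woman_rank[w][m] < woman_rank[w][current[w]]:
              free_men.append(current[w])   # w trades up; old partner is freed
              current[w] = m
          else:
              free_men.append(m)            # w rejects m; m stays free
      return current

Each pass through the while loop is constant time, and there are at most n^2 proposals in total, which matches the O(n^2) bound the author derives.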
I found this section very straightforward, and because I was a little sick on Friday, some of the material from class was difficult to follow. But after reading the section, I feel much more comfortable with exactly how and why we made the choices to use specific data structures. I also like the way the author addresses the problem encountered in (4.) on pages 46-47. I thought it was a clever solution to a confusing problem. In terms of further curiosity, I would have liked to see a note from the author on other data structure implementations (if there are any) for the algorithm. For instance, in class, my row discussed using a linked list to maintain the men's preferences, allowing a simple chaining operation to figure out whom a man should propose to next. Either way, I would rate the readability of this section at 10/10.
Section 2.4 - A Survey of Common Running Times
The section begins by laying out what is to come: a walkthrough of common runtime bounds and some of the approaches which lead to them. I found it helpful that the author presented the two different ways to approach runtime bounds: the bound which we would hope to achieve, and the runtime of the brute-force search. It helps to conceptualize these problems and their respective runtimes.
The author begins with the most understandable runtime: linear. This part was very clear, demonstrating linear-time examples in both finding the maximum of a list and merging two sorted lists. From here, the text moves on to O(n log n) runtime. I found the conceptual basis for when this occurs very helpful: “any algorithm that splits its input into two equal-sized pieces, solves each piece recursively, and then combines the two solutions in linear time.”
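To make that quote concrete for myself, here is a minimal sketch of the linear-time merge (the names are my own), with the split-recurse-combine pattern built on top of it:

  # Merge two sorted lists in linear time: each element is appended once.
  def merge(a, b):
      result = []
      i = j = 0
      while i < len(a) and j < len(b):
          if a[i] <= b[j]:
              result.append(a[i])
              i += 1
          else:
              result.append(b[j])
              j += 1
      result.extend(a[i:])  # at most one of these still has elements
      result.extend(b[j:])
      return result

  # The O(n log n) pattern from the quote: split in half, recurse, merge.
  def mergesort(xs):
      if len(xs) <= 1:
          return xs
      mid = len(xs) // 2
      return merge(mergesort(xs[:mid]), mergesort(xs[mid:]))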
The text then offers a taste of quadratic runtime, citing the example of finding the largest distance between (x, y) coordinates and noting how commonly quadratic time arises from nested loops.
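A minimal sketch of that example, assuming points are given as (x, y) tuples; the nested loops of the brute-force approach examine all roughly n^2/2 pairs:

  # Quadratic brute force: check every pair of points for the largest
  # distance (comparing squared distances avoids an unneeded square root).
  def farthest_distance_squared(points):
      best = 0
      for i in range(len(points)):
          for j in range(i + 1, len(points)):
              dx = points[i][0] - points[j][0]
              dy = points[i][1] - points[j][1]
              best = max(best, dx * dx + dy * dy)
      return best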
The next few parts, covering cubic time, O(n^k), and “Beyond Polynomial Time,” were slightly less straightforward. It took a little more time to digest these ideas, as we did not cover them as extensively in class. And the author ends with sublinear time, a refreshing conclusion as an easy-to-understand idea (I sketch its classic example below). I would rate the readability of this section as 6/10; I found it very dense at times.
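To pin down the sublinear idea before moving on: binary search on a sorted array probes only O(log n) entries rather than reading the whole input. A minimal sketch, mine rather than the book's code:

  # O(log n) probes into a sorted array: sublinear because we never
  # look at most of the input.
  def binary_search(xs, target):
      lo, hi = 0, len(xs) - 1
      while lo <= hi:
          mid = (lo + hi) // 2
          if xs[mid] == target:
              return mid
          elif xs[mid] < target:
              lo = mid + 1
          else:
              hi = mid - 1
      return -1  # not present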
Section 2.5 - A More Complex Data Structure: Priority Queues
Again, the author begins by laying out the goals for the section - which I truly appreciate as the reader. And here, the goal is to lay out a widely used data structure, one that can achieve superior runtime by capitalizing on implementation details.
The author then walks through the problem: that we want to be able to add, delete, and select elements quickly when an algorithm calls for it. The author then explains exactly what a priority queue is and shows that a sequence of O(n) priority queue operations can be used to sort n numbers.
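A minimal sketch of that sorting idea, using Python's heapq module as a stand-in priority queue:

  import heapq

  # Insert all n numbers, then extract the minimum n times: 2n priority
  # queue operations, each O(log n), for O(n log n) sorting overall.
  def pq_sort(xs):
      pq = []
      for x in xs:
          heapq.heappush(pq, x)  # n Insert operations
      return [heapq.heappop(pq) for _ in range(len(pq))]  # n ExtractMin operations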
We then dive into heaps, a data structure used to implement priority queues, which turns out to be a balanced binary tree that can be represented as an array. The author then walks through the heap operations, such as adding and deleting elements, and the respective algorithms necessary to implement them: Heapify-Down and Heapify-Up. We covered both of these in class, and after reading the section, I feel that I have a very strong understanding of the logic and runtime of each. The author then concludes the section by walking through the five primary operations for heaps implementing priority queues: StartHeap, Insert, FindMin, Delete, and ExtractMin, citing the runtimes for each using the details from earlier in the section.
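To check my understanding, here is a minimal sketch of the two algorithms on a min-heap stored in a 0-indexed Python list (the book indexes from 1, so the parent/child arithmetic here differs slightly):

  # Heapify-Up: a too-small element swaps upward with its parent until
  # heap order is restored; O(log n) since the tree height is log n.
  def heapify_up(h, i):
      while i > 0:
          parent = (i - 1) // 2
          if h[i] < h[parent]:
              h[i], h[parent] = h[parent], h[i]
              i = parent
          else:
              break

  # Heapify-Down: a too-large element swaps downward with its smaller
  # child until it reaches a valid position; also O(log n).
  def heapify_down(h, i):
      n = len(h)
      while True:
          left, right = 2 * i + 1, 2 * i + 2
          smallest = i
          if left < n and h[left] < h[smallest]:
              smallest = left
          if right < n and h[right] < h[smallest]:
              smallest = right
          if smallest == i:
              break
          h[i], h[smallest] = h[smallest], h[i]
          i = smallest

Insert is then an append followed by Heapify-Up, and ExtractMin moves the last element to the root and calls Heapify-Down, which is where the O(log n) per-operation runtimes come from.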
I would rate the readability of this section at 9/10. It flowed nicely and was a nice refresher from what we covered in class.
