Section 2.3: Implementing the Stable Matching Algorithm using Lists and Arrays
The remainder of chapter two reviews array and list implementations of the stable matching algorithm, common running times, and priority queues and heaps. With an array we can keep a list of n elements, where A[i] is the ith element in the list. We can access the ith element in O(1) time, and we can check for the presence of an element by iterating through the array in O(n) time; if the elements are sorted, we can do this in O(log n) time with binary search. An array does not help us maintain a list that changes over time, as in the stable matching problem, so we can instead use a linked list or a doubly linked list, which is traversable in both directions and allows insertion and deletion of elements in constant time given a pointer to the position. We can convert between an array and a list, in either direction, in O(n) time.
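The tradeoff above can be sketched as follows. This is a minimal illustration, not the textbook's code; the `Node` class and the helper functions are my own:

```python
# Sketch of the array-vs-linked-list tradeoff from section 2.3.
import bisect

# Array (Python list): O(1) access by index; O(log n) search if sorted.
A = [3, 7, 12, 19, 25]
assert A[2] == 12                      # ith element in O(1)
i = bisect.bisect_left(A, 19)          # binary search in O(log n)
assert A[i] == 19

# Doubly linked list: O(1) insertion/deletion given a node reference,
# but no O(1) access by index.
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

def insert_after(node, value):
    """Splice a new node in after `node` in O(1) time."""
    new = Node(value)
    new.prev, new.next = node, node.next
    if node.next:
        node.next.prev = new
    node.next = new
    return new

def delete(node):
    """Unlink `node` from the list in O(1) time."""
    if node.prev:
        node.prev.next = node.next
    if node.next:
        node.next.prev = node.prev
```

The point of the doubly linked `prev` pointer is exactly what the section notes: deletion needs the predecessor, and keeping a back-pointer avoids a linear scan to find it.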
Section 2.4: A Survey of Common Running Times
Linear running time is at most a constant factor times the size of the input: an algorithm running in O(n) time processes the input in a single pass, doing constant work per element (p. 48). O(n log n) is the running time of any algorithm that splits its input into two equal-sized pieces, solves each piece recursively, and combines the two solutions in linear time (e.g., the Mergesort algorithm, p. 50). Quadratic time arises from a search over all pairs of input items that spends constant time per pair. With nested loops, we can have an algorithm consisting of an outer loop with O(n) iterations, each of which runs an inner loop that also takes O(n) time; multiplying the two gives quadratic time (pp. 51-2). Searching over all subsets of size k, for a constant k, takes O(n^k) time, while searching over all subsets of an n-element set takes exponential time, on the order of 2^n (p. 53).
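The split/recurse/combine pattern and the all-pairs pattern above can be sketched as follows. The function names are my own, and the brute-force closest-pair loop is modeled on the book's quadratic example:

```python
# O(n log n): split in half, recurse, merge in linear time (mergesort).
def merge_sort(xs):
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])    # solve each half recursively
    right = merge_sort(xs[mid:])
    merged, i, j = [], 0, 0        # combine the sorted halves in O(n)
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

# O(n^2): constant work for each of the ~n^2/2 pairs of points.
def closest_pair_brute(points):
    best = None
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            d = (x1 - x2) ** 2 + (y1 - y2) ** 2  # squared distance
            if best is None or d < best:
                best = d
    return best
```

The nested loops in `closest_pair_brute` are exactly the outer-loop/inner-loop structure described above: n iterations outside, up to n inside, giving quadratic time.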
Finally, the book discusses priority queues, which are structures that maintain a set of elements, each paired with a key that indicates that element’s priority. A sequence of O(n) priority queue operations can be used to sort a set of n numbers (p. 58). A heap is a structure that supports these operations: each element has a key, and the smallest key sits at the root of the heap. Algorithms for heapify-up, which lets us add new elements to the heap in O(log n) time, and heapify-down, which lets us delete elements in O(log n) time, are given on pages 61-2. We can use heaps to implement priority queues (p. 64).
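A minimal min-heap sketch of the heapify-up and heapify-down routines discussed above, with the textbook's 1-based array indexing translated to Python's 0-based lists (the function names are mine, not the book's):

```python
def heapify_up(heap, i):
    """Bubble heap[i] up while it is smaller than its parent."""
    while i > 0:
        parent = (i - 1) // 2
        if heap[i] < heap[parent]:
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent
        else:
            break

def heapify_down(heap, i):
    """Push heap[i] down while it is larger than one of its children."""
    n = len(heap)
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        smallest = i
        if left < n and heap[left] < heap[smallest]:
            smallest = left
        if right < n and heap[right] < heap[smallest]:
            smallest = right
        if smallest == i:
            break
        heap[i], heap[smallest] = heap[smallest], heap[i]
        i = smallest

def insert(heap, key):
    """Add a key in O(log n) time: append, then heapify up."""
    heap.append(key)
    heapify_up(heap, len(heap) - 1)

def extract_min(heap):
    """Remove the root in O(log n) time: swap with last, pop, heapify down."""
    heap[0], heap[-1] = heap[-1], heap[0]
    smallest = heap.pop()
    if heap:
        heapify_down(heap, 0)
    return smallest
```

Repeatedly calling `extract_min` returns the keys in sorted order, which is the O(n)-operations sorting idea from p. 58: n inserts followed by n extractions.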
A lot of the questions I had about distinguishing or identifying the running times of algorithms and parts of algorithms were answered in these sections, and heaps and priority queues were easy to follow because we worked with them in CS112. I liked how we stepped through the heapify-up and heapify-down processes in class because it made me think through each one step by step, whereas in the book it’s easy to start skimming those explanations. I give this section a 10 for readability: the diagrams and written-out algorithms were helpful in explaining concepts.