Table of Contents

Chapter 2

2.1: Computational Tractability

Summary of the Section
Notes on the Section
Questions
Discuss anything that makes more sense after reading it again, after it was presented in class (or vice versa)
How Readable the Section was

2.2: Asymptotic Order of Growth

Summary of the section
Notes on the Section
Questions
Discuss anything that makes more sense after reading it again, after it was presented in class (or vice versa)
How readable the section was

2.3 Implementing the Stable Matching Algorithm

Summary

In order to implement the Gale-Shapley algorithm in O(n^2) runtime, we need to consider which data structures to use and how they will be manipulated, so as to limit the number of computational steps. Only two of the simplest data structures are needed to implement the Gale-Shapley algorithm in O(n^2): arrays and linked lists. Linked lists are good for maintaining a dynamically changing set of data. However, arrays are not good for sets of data that are constantly changing, because inserting or deleting elements results in O(n) runtime. Now, for the implementation, the set of unmatched men is kept as a linked list, because this set changes constantly as the algorithm runs. Engagements are tracked in two arrays of length n, one mapping each man to his current partner and one mapping each woman to hers. Whom each man will propose to next is tracked in an array of integers, indicating the position on m's preference list he will propose to next. Finally, the preference lists are implemented as two 2D arrays. However, the women's array is accessed differently than the men's array, in order to keep every operation inside the while loop constant time, O(1).
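The data structures above can be put together in a short Python sketch. This is my own minimal version (the names `free_men`, `next_proposal`, `rank`, etc. are not from the text), using a list as the stack of free men, arrays for engagements and next proposals, and the inverse "ranking" array for the women:

```python
# A minimal sketch of the O(n^2) Gale-Shapley implementation, assuming
# man_pref[m] and woman_pref[w] are each a permutation of 0..n-1.
def gale_shapley(man_pref, woman_pref):
    n = len(man_pref)
    free_men = list(range(n))          # dynamically changing set of free men
    next_proposal = [0] * n            # position on m's list he proposes to next
    wife = [None] * n                  # wife[m] = woman m is engaged to
    husband = [None] * n               # husband[w] = man w is engaged to

    # rank[w][m] = position of man m on woman w's list (inverse array),
    # so comparing two proposers is O(1) inside the loop.
    rank = [[0] * n for _ in range(n)]
    for w in range(n):
        for pos, m in enumerate(woman_pref[w]):
            rank[w][m] = pos

    while free_men:
        m = free_men.pop()
        w = man_pref[m][next_proposal[m]]
        next_proposal[m] += 1
        if husband[w] is None:                   # w is free: engage
            wife[m], husband[w] = w, m
        elif rank[w][m] < rank[w][husband[w]]:   # w prefers m: trade up
            free_men.append(husband[w])
            wife[husband[w]] = None
            wife[m], husband[w] = w, m
        else:                                    # w rejects m
            free_men.append(m)
    return wife
```

Each man proposes at most n times and every step inside the loop is O(1), which gives the O(n^2) bound.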

Outline of the Gale-Shapley Algorithm Implementation
Arrays vs. Lists
Questions
Additional Information

After discussing the implementation of the algorithm in class with the various data structures, I had a hard time comprehending the women's preference list implementation, because it is backwards relative to the men's. However, after reading the text, it became clear that you merely need to compare the position of man m to the position of man m' within w's row of the women's preference array, which makes the step of determining whether w prefers m or m' constant time, O(1). The women's preference array is merely the inverse of the men's: instead of listing men in order of preference, each woman's row stores the rank she assigns to each man.
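This inversion is easier to see on a small hypothetical example. Here `pref` is one woman's ordered preference list and `rank` is the inverse array built from it:

```python
# Inverting one woman's preference list into a ranking array.
pref = [3, 1, 0, 2]          # w's list: man 3 first, then man 1, 0, 2
rank = [0] * len(pref)
for pos, m in enumerate(pref):
    rank[m] = pos            # rank[m] = position of man m on w's list

# Comparing two proposers is now a single array lookup, O(1):
def w_prefers(m, m2):
    return rank[m] < rank[m2]
# w_prefers(1, 0) is True: man 1 (position 1) beats man 0 (position 2)
```

Without the inverse array, deciding whether w prefers m to m' would require scanning her list, an O(n) step inside the while loop.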

Overall, I would give the section a solid 9 out of 10 on the readability scale. The progression from refreshing the reader on lists and arrays to using them to implement the Gale-Shapley algorithm was extremely helpful and definitely made the implementation section of the reading easy to comprehend. Additionally, the implementation section was broken up into four parts, the main actions within the while loop, organized in the order that the steps would be visited. The only reason the section is not a 10 is because I had to re-read the women's preference list description twice to fully comprehend its implementation.

2.4 Common Runtimes

Summary

Linear time happens when an algorithm spends a constant amount of time on each input element. Quadratic time arises when nested loops are involved: each loop is O(n), so you multiply the runtimes together to get O(n^2). O(n log n) typically comes about in divide-and-conquer situations: the algorithm splits the input into two pieces, solves each piece recursively, then merges the results together into a single output. O(log n) happens when an algorithm divides the active portion of the input in half on each iteration. O(n^k) occurs, for any constant k, when we search over all subsets of size k. O(2^n) is the runtime of a natural brute-force search algorithm, and O(n!) comes about when the algorithm operates on each possible permutation of the input.
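A concrete instance of the O(log n) pattern is binary search, where each iteration halves the active portion of a sorted array (a standard example, not one the summary names):

```python
# Binary search: O(log n) because the active range [lo, hi]
# shrinks by half on every iteration.
def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1         # discard the lower half
        else:
            hi = mid - 1         # discard the upper half
    return -1                    # target not present
```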

Notes

Most problems have a natural “search space”. A unifying theme in algorithm design is the search for algorithms whose performance is more efficient than a brute-force enumeration of the search space. When we get a new problem, we have to think about two different bounds: the runtime we hope to achieve and the size of the problem’s natural search space (the runtime of a brute-force search algorithm).

Linear Time

O(nlogn) Time

Quadratic Time

Cubic Time

O(n^k) Time

Beyond Polynomial Time

Sublinear Time

Questions
Additional Information

In class, we didn't really talk about O(2^n) or O(n!) in much depth, so it was really interesting and helpful to see them in the reading. Until doing the reading, I wasn't quite sure why O(2^n) was such a common or important runtime, but now it is obvious why: it is the natural runtime of a brute-force search algorithm, which we try to improve upon in all of the algorithms that we create in this class. Moreover, the way a factorial runtime comes about was more explicitly explained in the reading than in class. Although we would hope never to have an algorithm that runs this slowly, it is still important to know how it comes about. Factorial runtime arises because n! is the number of ways to match up n items with n other items, and also the size of a search space consisting of all ways to arrange n items in order.

This section was a 9 out of 10 in readability because the runtimes were explained very clearly. The only part that got a little confusing was some of the examples. Overall, however, the language was very precise and the common runtimes were explained in great detail.

2.5 Priority Queues

Summary

A priority queue consists of elements that each have a priority value, also known as a key, and each time we select an element from the set, we take the element with the highest priority (the smallest key). We want to be able to add, delete, and select in O(log n) time. The heap is the best data structure to implement the priority queue. A heap is a balanced binary tree: the tree has a root, and each node can have up to two children, a left and a right child. The elements are said to be in heap order if the key of any element is at least as large as the key of the element at its parent node in the tree. We use an array to implement the heap: the element at index i has its left child at position 2i and its right child at position 2i + 1. When adding an element, we employ an algorithm called Heapify-up, in which you add the new element at the end of the current elements in the array, compare it to its parent, and swap them if the child's priority is higher (its key smaller). It is called recursively until the element reaches its correct spot, and it runs in O(log n) time. When deleting, we employ an algorithm called Heapify-down, in which you remove the item from the given position and then shift items accordingly by repeatedly identifying the higher-priority child. This is also recursive and executes in O(log n) time. The heap combines the benefits of a sorted array and a list.
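The array representation and the heap-order condition can be checked directly. This small sketch uses 1-indexing as in the text (index 0 is left unused) and treats the array entries themselves as the keys:

```python
# Array representation of a heap, 1-indexed as in the text:
# element at index i has children at 2i and 2i+1, parent at i // 2.
heap = [None, 2, 5, 3, 9, 6, 11, 4]   # index 0 unused

def parent(i): return i // 2
def left(i):   return 2 * i
def right(i):  return 2 * i + 1

# Heap order: every element's key is at least its parent's key.
n = len(heap) - 1
in_heap_order = all(heap[i] >= heap[parent(i)] for i in range(2, n + 1))
```

With 1-indexing, the child/parent arithmetic is exactly the 2i, 2i + 1, and floor(i/2) from the text; a 0-indexed version would use 2i + 1, 2i + 2, and (i - 1) // 2 instead.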

Notes

The main goal in the book is to seek algorithms that improve qualitatively on brute-force search, with polynomial-time solvability as the concrete formulation of this. Once we have an efficient algorithm to solve a problem, it can be possible to find further improvements by being mindful of the implementation and by using more sophisticated data structures. The priority queue is one of the most broadly useful sophisticated data structures.

The Problem

A Data Structure for Implementing a Priority Queue

Implementing Heap Operations

Heapify-Up

 Heapify-up(H, i):
    If i>1 then
       Let j = parent(i) = ⌊i/2⌋
       If key[H[i]] < key[H[j]] then
          Swap the array entries H[i] and H[j]
          Heapify-up(H, j)
       Endif
    Endif
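The pseudocode above translates almost line for line into Python. This version keeps the 1-indexed array (index 0 unused) and, as a simplifying assumption, uses the array entries themselves as the keys:

```python
# Direct translation of Heapify-up: 1-indexed heap array H, index 0
# unused; entries double as keys.
def heapify_up(H, i):
    if i > 1:
        j = i // 2                 # parent(i) = floor(i/2)
        if H[i] < H[j]:
            H[i], H[j] = H[j], H[i]
            heapify_up(H, j)

H = [None, 2, 5, 3, 9]
H.append(1)                        # add new element at the end
heapify_up(H, len(H) - 1)          # bubble it up to its correct spot
# H is now [None, 1, 2, 3, 9, 5]
```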

Heapify-Down

 Heapify-down(H, i):
    Let n = length(H)
    If 2i > n then
       Terminate with H unchanged
    Else if 2i < n then
       Let left = 2i, and right = 2i+1
       Let j be the index, among left and right, with the smaller key[H[j]]
    Else if 2i = n then
       Let j = 2i
    Endif
    If key[H[j]] < key[H[i]] then
       Swap the array entries H[i] and H[j]
       Heapify-down(H, j)
    Endif
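As with Heapify-up, the pseudocode maps directly to Python. The three branches are the three child-count cases: no children (leaf), two children, or exactly one child. Again this assumes a 1-indexed array whose entries are the keys:

```python
# Direct translation of Heapify-down: 1-indexed heap array H, index 0
# unused; entries double as keys.
def heapify_down(H, i):
    n = len(H) - 1
    if 2 * i > n:                      # no children: i is a leaf
        return
    elif 2 * i < n:                    # two children: take the smaller key
        left, right = 2 * i, 2 * i + 1
        j = left if H[left] <= H[right] else right
    else:                              # 2i == n: exactly one child
        j = 2 * i
    if H[j] < H[i]:
        H[i], H[j] = H[j], H[i]
        heapify_down(H, j)

H = [None, 9, 2, 3, 5, 6]              # root violates heap order
heapify_down(H, 1)                     # sink it to its correct spot
# H is now [None, 2, 5, 3, 9, 6]
```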

Implementing Priority Queues with Heaps

Questions
Additional Information

After going to class and having Professor Sprinkle really break down the components of Heapify-down, what exactly happens in that algorithm became much clearer. She emphasized that the three if/else-if statements correspond to the number of children a node has: 2i > n means the element has no children, 2i < n means it has two children, and 2i = n means it has exactly one child. The reason for having Heapify-down also became clearer after class: it is there to fill the hole that deleting an element leaves, while maintaining heap order.

This section's readability was an 8 out of 10. The progression from the definitions of priority queues and heaps to the ultimate explanation was very precise and natural. The only reason that it wasn't a 10 is because it took me a while to fully grasp the concepts of Heapify-up and Heapify-down. Ultimately, the section used very clear language and precise explanations of priority queues and heaps, and it included a lot of definitions and explanations of properties of different data structures, which were super helpful in going back over my notes.