Table of Contents

Chapter 2

Basics of Algorithm Analysis

This chapter discusses the efficiency of algorithms: not all algorithms are efficient in terms of memory usage and speed. “Even bad algorithms can run quickly when applied to small test cases on extremely fast processors; even good algorithms can run slowly when they are coded sloppily.”

Computational Tractability

The worst-case running time of the Stable Matching algorithm from the previous chapter is O(n^2): there are two sets, M and W, each of size n, and each man proposes to each woman at most once, so there are at most n^2 proposals in total. By contrast, there are n! possible perfect matchings, so a Brute-Force Search over all of them is vastly slower.

NOTE: In the book, there are some definitions of efficiency and reasons why each particular definition is a good or a bad one. There is also a table with run times of different algorithms on inputs of increasing size.

Asymptotic Order of Growth

Asymptotic Upper Bounds

Defined as T(n) = O(f(n)), meaning T(n) is O(f(n)) if there exist constants c > 0 and n0 >= 0 such that for all n >= n0, we have T(n) <= c·f(n).
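The definition can be checked numerically for a concrete case. A minimal sketch, assuming the illustrative function T(n) = 3n^2 + 10n with witnesses c = 4 and n0 = 10 (these constants are my choice, not from the book):

```python
# Numerical check of the big-O definition: T(n) = 3n^2 + 10n is O(n^2),
# witnessed by c = 4 and n0 = 10 (constants chosen for illustration).
def T(n):
    return 3 * n**2 + 10 * n

def f(n):
    return n**2

c, n0 = 4, 10

# For all n >= n0 (verified here up to n = 1000), T(n) <= c * f(n).
assert all(T(n) <= c * f(n) for n in range(n0, 1001))
print("T(n) <= 4 * f(n) for all checked n >= 10")
```

At n = n0 = 10 the bound is tight (T(10) = 400 = 4 · 10^2), which is why a smaller n0 would not work with this c.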

Asymptotic Lower Bounds

Defined as T(n) = Ω(f(n)), meaning T(n) is Ω(f(n)) if there exist constants ε > 0 and n0 >= 0 such that for all n >= n0, we have T(n) >= ε·f(n).

Asymptotic Tight Bounds

Defined as T(n) = Θ(f(n)), meaning T(n) is Θ(f(n)) if T(n) is both O(f(n)) and Ω(f(n)); f(n) is then an asymptotically tight bound, i.e., just the right bound.

Properties of Asymptotic Growth Rates

Asymptotic Bounds for Some Common Functions

Implementing the Stable Matching Algorithm Using Lists and Arrays

We do not need to fully implement every algorithm, but we do have to know how the data are represented and manipulated so that we can bound the number of computation steps. For example, we need to think about how to represent the men and women and their rankings in the Stable Matching algorithm.

Arrays and Lists

Arrays
Doubly Linked List

Implementation of the Stable Matching Algorithm

With this representation, we are able to run the algorithm in O(n^2) time.
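The O(n^2) implementation can be sketched as follows. The dict-of-lists preference format and the use of a deque for the free men are assumptions made for illustration (the book uses arrays and a linked list, but the constant-time operations are the same):

```python
from collections import deque

def stable_matching(men_prefs, women_prefs):
    # Precompute rank[w][m] = position of man m in woman w's list, so each
    # "does w prefer m to her current partner?" check costs O(1).
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}  # next woman each man proposes to
    wife, husband = {}, {}
    free_men = deque(men_prefs)                # free men: O(1) add/remove

    while free_men:
        m = free_men[0]
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in husband:                          # w is free: engage
            husband[w], wife[m] = m, w
            free_men.popleft()
        elif rank[w][m] < rank[w][husband[w]]:        # w prefers m: trade up
            old = husband[w]
            del wife[old]
            free_men.append(old)
            husband[w], wife[m] = m, w
            free_men.popleft()
        # else: w rejects m, who stays free and proposes again next round
    return husband

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(stable_matching(men, women))  # → {'w1': 'm2', 'w2': 'm1'}
```

Each man proposes to each woman at most once, so the loop runs at most n^2 times, and every iteration does constant work thanks to the precomputed ranks.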

A Survey of Common Running Times

Linear Time

O(n) run time basically means doing a constant amount of work for each element of the data set, such as computing a maximum or merging two sorted lists.
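A classic linear-time example is merging two sorted lists: repeatedly take the smaller front element, touching each element once. A minimal sketch:

```python
def merge_sorted(a, b):
    # Merge two sorted lists in O(len(a) + len(b)) time by repeatedly
    # appending the smaller of the two front elements.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])   # at most one of these
    out.extend(b[j:])   # extends is non-empty
    return out

print(merge_sorted([1, 4, 7], [2, 3, 9]))  # → [1, 2, 3, 4, 7, 9]
```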

O(n log n) Time

Such algorithms basically split the data set into halves, solve each half recursively, and merge the results back together in linear time; Merge Sort is the classic example.
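The split-recurse-merge pattern can be sketched with Merge Sort, whose recurrence T(n) = 2T(n/2) + O(n) gives O(n log n):

```python
def merge_sort(xs):
    # Split in half, sort each half recursively, merge in linear time.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # → [1, 2, 3, 5, 8, 9]
```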

Quadratic Time

These algorithms need to examine every pair of elements in the data set, typically with two nested loops, as when finding the closest pair of points by brute force.
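The all-pairs pattern can be sketched with the brute-force closest-pair computation: two nested loops over n points give n(n−1)/2 distance comparisons, i.e., O(n^2) time.

```python
def closest_pair_distance(points):
    # Examine every pair (i, j) with i < j: O(n^2) comparisons.
    best = float("inf")
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            d = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            best = min(best, d)
    return best

print(closest_pair_distance([(0, 0), (3, 4), (1, 1)]))  # → 1.4142135623730951
```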

Cubic Time

For example, given a collection of sets, finding a pair of disjoint sets by checking, for each pair of sets, whether each element of one belongs to the other yields three nested loops.
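A minimal sketch of the three-nested-loops pattern, assuming the sets are given as plain lists (so each membership test is itself a linear scan):

```python
def find_disjoint_pair(sets):
    # For each pair of sets, scan every element of the first and test
    # membership in the second: three nested loops → O(n^3) with lists.
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            if not any(x in sets[j] for x in sets[i]):
                return i, j          # indices of a disjoint pair
    return None                      # every pair of sets intersects

print(find_disjoint_pair([[1, 2], [2, 3], [4, 5]]))  # → (0, 2)
```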

O(n^k) Time

We search over all subsets of size k, generalizing the pairwise search used for quadratic time; for example, testing whether a graph contains an independent set of size k. (Algorithm on page 53)
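A minimal sketch of the subset search for the independent-set example (the edge-list format is an assumption): there are C(n, k) = O(n^k) subsets, and checking one subset for internal edges costs O(k^2), so the total is O(n^k · k^2).

```python
from itertools import combinations

def has_independent_set(nodes, edges, k):
    # Try every subset of k nodes; accept one with no internal edge.
    edge_set = {frozenset(e) for e in edges}
    for subset in combinations(nodes, k):
        if all(frozenset(p) not in edge_set for p in combinations(subset, 2)):
            return True
    return False

print(has_independent_set([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)], 2))  # → True
```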

Beyond Polynomial Time

Sublinear Time

A More Complex Data Structure: Priority Queues

Priority Queues are one of the most broadly useful nontrivial data structures, and they will be helpful later for describing graphs. The men in the Stable Matching algorithm can be kept in such a queue: once a man is engaged, he can be removed from the queue in constant time, and if he becomes single again, he can be added back in constant time. A priority queue can also be used to sort a list (proof on page 58).
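The sorting claim can be sketched with Python's standard-library heap: insert all n elements, then repeatedly extract the minimum. Each of the 2n operations costs O(log n), for O(n log n) overall.

```python
import heapq

def pq_sort(xs):
    # n inserts + n extract-mins on a priority queue → O(n log n) sort.
    heap = []
    for x in xs:
        heapq.heappush(heap, x)
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(pq_sort([4, 1, 3, 2]))  # → [1, 2, 3, 4]
```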

Heap: To represent a Priority Queue

A heap is a balanced binary tree in which each node's key is less than or equal to the keys of its children (for an ascending order of numbers, i.e., a min-heap). It can be represented in a list, where the first position holds the top node and the left and right children of the node at position i are at positions 2i and 2i+1 respectively, where “i” is the index of the element in the queue.
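A minimal sketch of the array representation, using 1-based indexing (index 0 is left unused so the child/parent arithmetic stays clean; this convention is an assumption, 0-based variants shift the formulas):

```python
# Array (1-based) representation of a min-heap: the node at index i has
# children at 2*i and 2*i + 1, and its parent at i // 2.
heap = [None, 2, 5, 3, 9, 6, 4]   # a valid min-heap; heap[0] unused

def children(i):
    return 2 * i, 2 * i + 1

def parent(i):
    return i // 2

assert heap[1] == 2               # the minimum sits at the root
assert children(1) == (2, 3)      # root's children: heap[2]=5, heap[3]=3
assert parent(5) == 2             # heap[5]=6 is a child of heap[2]=5
```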

When adding an element, it gets placed in the first empty space in the heap. While it is smaller than its parent, it gets swapped with its parent, moving up one level each time. This algorithm is called Heapify-up (Algorithm in page 61. Proof in page 62).
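A minimal sketch of Heapify-up under the 1-based array convention (example values are illustrative):

```python
def heapify_up(heap, i):
    # heap uses 1-based indexing (heap[0] unused). While the element at i
    # is smaller than its parent, swap them and continue from the parent.
    while i > 1 and heap[i] < heap[i // 2]:
        heap[i], heap[i // 2] = heap[i // 2], heap[i]
        i = i // 2

heap = [None, 2, 5, 3, 9, 6, 4]
heap.append(1)          # insert 1 at the next free position (index 7)
heapify_up(heap, 7)     # 1 swaps past 3, then past 2, up to the root
print(heap)  # → [None, 1, 5, 2, 9, 6, 4, 3]
```

Each swap moves the new element one level up, so an insertion costs O(log n) on a heap of n elements.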

When deleting an element, it gets replaced by the last element in the heap, which then moves up or down as required to restore heap order. This is where the Heapify-down algorithm is needed (Algorithm in page 63. Proof in page 64).
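A minimal sketch of Heapify-down and deletion, again with 1-based indexing. The deletion helper shown only pushes the moved element down, which suffices for the extract-min case demonstrated; a full version would also call Heapify-up, since for an arbitrary position the moved element can be smaller than its new parent.

```python
def heapify_down(heap, i):
    # heap uses 1-based indexing; sift the element at i down toward the
    # leaves, swapping with its smaller child until heap order holds.
    n = len(heap) - 1
    while True:
        left, right, smallest = 2 * i, 2 * i + 1, i
        if left <= n and heap[left] < heap[smallest]:
            smallest = left
        if right <= n and heap[right] < heap[smallest]:
            smallest = right
        if smallest == i:
            return
        heap[i], heap[smallest] = heap[smallest], heap[i]
        i = smallest

def delete_at(heap, i):
    # Overwrite position i with the last element, shrink the heap,
    # then restore heap order below i.
    heap[i] = heap[-1]
    heap.pop()
    heapify_down(heap, i)

heap = [None, 1, 5, 2, 9, 6, 4]
delete_at(heap, 1)      # extract the minimum: 4 replaces 1, sifts down
print(heap)  # → [None, 2, 5, 4, 9, 6]
```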

Implementing Priority Queues with Heaps

Questions and Comments

This chapter was easily readable and the class discussions made it very clear, so I have no questions on this chapter. Also, since most of what I read in this chapter was already covered in CSCI 112, it wasn't anything new, so maybe that was why I thought it wasn't “interesting” and was easy to understand.