Chapter 2

This chapter explains how to analyze algorithms by measuring how their resource requirements scale with increasing input size.

Section 1: Computational Tractability

We want to find efficient algorithms, but what does "efficient" mean? We need to be more specific than saying an algorithm runs fast, or that it beats brute-force search, so we will call an algorithm efficient if it has a polynomial worst-case running time. This means that for an input of size N, the worst-case running time is bounded by cN^d, where c and d are positive constants.
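As a small sanity check on the definition, we can count the basic steps of a simple nested-loop algorithm and verify they stay under a polynomial bound. The step-counting scheme and the constants c = 1, d = 2 are our own illustrative choices, not from the text:

```python
def count_steps(n):
    """Count inner-loop iterations of a simple doubly nested loop (O(n^2))."""
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1
    return steps

# With c = 1 and d = 2, the step count is bounded by c * n**d for every n.
for n in [10, 100, 1000]:
    assert count_steps(n) <= 1 * n ** 2
```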

Readability: 8

Section 2: Asymptotic Order of Growth

This section explains how to express the growth rate of running times.

Definitions of O, Ω, and Θ

Upper bounds: T(n) is O(f(n)) if T is bounded above by a constant multiple of f, that is, T(n) <= c*f(n) for some constant c > 0 and all sufficiently large n.

Lower bounds: T(n) is Ω(f(n)) if T is bounded below by a constant multiple of f, that is, T(n) >= e*f(n) for some constant e > 0 and all sufficiently large n.

Tight bounds: T(n) is Θ(f(n)) if T is O(f(n)) and Ω(f(n)).
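These definitions can be illustrated with a concrete pair of functions. T and the witness constants below are our own example, chosen so the inequalities hold from n0 onward:

```python
# Hypothetical running time T and comparison function f:
def T(n):
    return 3 * n ** 2 + 10 * n

def f(n):
    return n ** 2

# Upper bound: 3n^2 + 10n <= 4n^2 once n >= 10, so T is O(n^2).
c, n0 = 4, 10
assert all(T(n) <= c * f(n) for n in range(n0, 1000))

# Lower bound: T(n) >= 3n^2 for all n >= 1, so T is Omega(n^2).
# Together, T is Theta(n^2).
assert all(T(n) >= 3 * f(n) for n in range(1, 1000))
```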

Properties of O, Ω, and Θ

Transitivity: If f is O(g) and g is O(h), then f is O(h). This property also holds for lower and tight bounds.

Sums of Functions: If k is a fixed constant, f1, f2, …, fk, and h are functions, and each fi is O(h), then f1 + f2 + … + fk is O(h).

Common Functions

Polynomial, logarithmic, and exponential functions are commonly seen. A polynomial's rate of growth is determined by its highest-order term. log n is O(n^x) for every constant x > 0, and every exponential grows faster than every polynomial.

Readability: 7

Section 3: Implementing the Stable Matching Algorithm

To analyze the running time of an algorithm, we need to think about how we would implement it. Two important data structures we might use are arrays and (linked) lists. Arrays support constant-time access to a given element, but searching an unsorted array takes O(n) time. In lists, access to a given element i is O(i), but insertion and deletion are constant time given a pointer to the element. Lists are generally better for maintaining dynamic sets. We will use lists and arrays to implement the G-S algorithm.

We know the G-S algorithm terminates in at most n^2 iterations, so if we want O(n^2) running time, each iteration must run in constant time. In each iteration we:

  1. Identify a free man, m
  2. Identify m's highest ranked woman w who he has not yet proposed to
  3. Determine if w is engaged and to whom
  4. Determine if w prefers her current partner to m

We can implement each of these steps in constant time by:

  1. Maintaining a list of free men
  2. Maintaining an array “Next” so Next[m] returns the woman man m will propose to next
  3. Maintaining an array “Current” so Current[w] returns the current partner of w or null
  4. Creating an array “Ranking” from our original preference array so that Ranking[w,m] returns the rank of m in w's preference list

This allows us to implement the G-S algorithm in O(n^2) time.
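The four data structures above can be combined into a working sketch. The code below is one possible Python rendering (names like next_prop and the deque for the free-man list are our own choices, not the book's pseudocode); each loop iteration does only constant-time work:

```python
from collections import deque

def gale_shapley(man_pref, woman_pref):
    """O(n^2) Gale-Shapley. man_pref[m] and woman_pref[w] are preference
    lists, most preferred first; men and women are numbered 0..n-1."""
    n = len(man_pref)
    free_men = deque(range(n))          # 1. list of free men
    next_prop = [0] * n                 # 2. Next[m]: index into m's list
    current = [None] * n                # 3. Current[w]: w's partner or None
    # 4. Ranking[w][m]: rank of m in w's list (lower = more preferred)
    ranking = [[0] * n for _ in range(n)]
    for w in range(n):
        for rank, m in enumerate(woman_pref[w]):
            ranking[w][m] = rank

    while free_men:
        m = free_men[0]
        w = man_pref[m][next_prop[m]]   # highest-ranked woman not yet proposed to
        next_prop[m] += 1
        if current[w] is None:          # w is free: they become engaged
            current[w] = m
            free_men.popleft()
        elif ranking[w][m] < ranking[w][current[w]]:  # w prefers m
            free_men.popleft()
            free_men.append(current[w]) # her old partner becomes free
            current[w] = m
        # otherwise w rejects m, and m proposes again next iteration
    return current                      # current[w] = man matched to w
```

For example, `gale_shapley([[0, 1], [0, 1]], [[0, 1], [1, 0]])` returns `[0, 1]`: both men prefer woman 0, she prefers man 0, and man 1 ends up with woman 1.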

Readability: 7

Section 4: A Survey of Common Running Times

Here are some common running times and examples of algorithms with these running times.

  • Linear O(n) time: Running time is a constant factor times the size of the input. Computing the maximum and merging two sorted lists are O(n) algorithms (p 48).
  • O(n log n) time: This is the running time of an algorithm that splits its input in half, recurses on each half, and combines the solutions in linear time. Mergesort is an O(n log n) algorithm (p 50).
  • Quadratic O(n^2) time: This is the running time of an algorithm with two nested loops, such as finding the pair of points that are closest together in the plane (p 51).
  • Cubic O(n^3) time: This is the running time of an algorithm with three nested loops, such as finding the pairs of disjoint sets when given n sets (p 52).
  • O(n^k) time: This is the running time of brute-force search over all subsets of size k of an n-element set. Checking for an independent set of size k this way takes O(n^k) time (p 53).
  • O(2^n) time: 2^n is the total number of subsets of an n-element set. Finding an independent set of maximum size by trying all subsets is an O(2^n) algorithm (p 54).
  • O(n!) time: n! grows even faster than 2^n. n! is the number of ways to match up n items with n other items; it is also the number of ways to arrange n items in order. This running time emerges in brute-force approaches to the Traveling Salesman Problem (p 56).
  • Sublinear time: Binary search has O(log n) running time since it discards a constant fraction of the input on each iteration.
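The linear-time merge of two sorted lists mentioned above is short enough to write out; this is a sketch of the standard two-pointer merge, not the book's pseudocode:

```python
def merge(a, b):
    """Merge two sorted lists in O(len(a) + len(b)) time -- the linear-time
    combine step that Mergesort uses after recursing on each half."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    out.extend(a[i:])   # at most one of these two tails is nonempty
    out.extend(b[j:])
    return out
```

Each loop iteration appends one element, so the total work is proportional to the combined length of the two inputs.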

Readability: 6

Section 5: Priority Queues

This section explains how to implement a priority queue, a data structure that supports addition, deletion, and selection of the highest-priority element. Priority queues are useful for many things, including sorting a set of n numbers.

We can implement each queue operation in O(log n) time using a heap. A heap combines the benefits of a sorted array and a list. A heap can be thought of as a balanced binary tree where the key of a parent is always <= the keys of its children. Finding the highest-priority element in a heap is O(1) since the element with the smallest key is the root. Adding and removing from a heap is O(log n) if we use Heapify-Up and Heapify-Down.

Thus, if we implement a priority queue with a heap, we get O(n) time to build the queue, O(log n) time to insert, O(1) time to find the minimum key, O(log n) time to delete an element, and O(log n) time to extract the value with the minimum key.
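One way to sketch this is an array-backed min-heap; the method names follow the text's Heapify-Up and Heapify-Down, but the details (parent of i at (i-1)//2, children at 2i+1 and 2i+2) are the usual array layout, assumed here rather than taken from the book:

```python
class MinHeap:
    """Array-backed binary min-heap sketch."""
    def __init__(self, items=()):
        self.a = list(items)
        for i in range(len(self.a) // 2 - 1, -1, -1):  # O(n) build
            self._heapify_down(i)

    def insert(self, key):            # O(log n)
        self.a.append(key)
        self._heapify_up(len(self.a) - 1)

    def find_min(self):               # O(1): smallest key is the root
        return self.a[0]

    def extract_min(self):            # O(log n)
        root, last = self.a[0], self.a.pop()
        if self.a:
            self.a[0] = last
            self._heapify_down(0)
        return root

    def _heapify_up(self, i):
        while i > 0 and self.a[i] < self.a[(i - 1) // 2]:
            parent = (i - 1) // 2
            self.a[i], self.a[parent] = self.a[parent], self.a[i]
            i = parent

    def _heapify_down(self, i):
        n = len(self.a)
        while True:
            smallest = i
            for c in (2 * i + 1, 2 * i + 2):   # left and right children
                if c < n and self.a[c] < self.a[smallest]:
                    smallest = c
            if smallest == i:
                return
            self.a[i], self.a[smallest] = self.a[smallest], self.a[i]
            i = smallest
```

Sorting with this priority queue is then n inserts followed by n extract-mins, for O(n log n) total, matching the bounds above.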

Readability: 7

courses/cs211/winter2012/journals/virginia/chapter2.txt · Last modified: 2012/01/25 01:53 by lovellv