====== Chapter 2 ======

This chapter explains how to analyze algorithms by measuring how their resource requirements scale with increasing input size.

===== Section 1: Computational Tractability =====

We want to find efficient algorithms, but what does "efficient" mean? We need to be more specific than saying an algorithm runs fast or beats brute-force search, so we will consider an algorithm efficient if it has a polynomial worst-case running time. This means that for an input of size N, the worst-case running time is bounded by cN^d, where c and d are positive constants.
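
As a quick illustration of my own (not from the book), this Python snippet shows why polynomial bounds scale gracefully: doubling the input multiplies an n^2 bound by 4, while 2^n squares.

<code python>
# Compare polynomial growth (n^2) with brute-force-style growth (2^n)
# as the input size doubles.
for n in [10, 20, 40, 80]:
    print(f"n = {n:3d}   n^2 = {n**2:6d}   2^n = {2**n}")
</code>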

Readability:

===== Section 2: Asymptotic Order of Growth =====

This section explains how to express the growth rate of running times.

**Definitions**

Upper bounds: T(n) is O(f(n)) if T is bounded above by a constant multiple of f, that is, T(n) <= c*f(n) for some constant c > 0 and all sufficiently large n.

Lower bounds: T(n) is Ω(f(n)) if T is bounded below by a constant multiple of f, that is, T(n) >= e*f(n) for some constant e > 0 and all sufficiently large n.

Tight bounds: T(n) is Θ(f(n)) if T is both O(f(n)) and Ω(f(n)).
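
For a concrete example of my own: T(n) = 3n^2 + 10n is Θ(n^2), since c = 4 works as an upper-bound constant once n >= 10, and e = 3 works as a lower-bound constant for all n. A quick numeric check:

<code python>
# T(n) = 3n^2 + 10n is Theta(n^2): with e = 3 and c = 4,
# e*n^2 <= T(n) <= c*n^2 holds for every n >= 10.
def T(n):
    return 3 * n**2 + 10 * n

assert all(3 * n**2 <= T(n) <= 4 * n**2 for n in range(10, 10000))
print("bounds hold for 10 <= n < 10000")
</code>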

**Properties of O, Ω, and Θ**

Transitivity: If f = O(g) and g = O(h), then f = O(h). The same holds for Ω and Θ.

Sums of Functions: If k is a constant, f1, f2, ..., fk, h are functions and each fi = O(h), then f1 + f2 + ... + fk = O(h).

**Common Functions**

Polynomial, logarithmic and exponential functions are commonly seen. A polynomial's asymptotic rate of growth is determined by its highest-order term; logarithms grow more slowly than every polynomial, and every exponential grows faster than every polynomial.

Readability:

===== Section 3: Implementing the Stable Matching Algorithm =====

To analyze the running time of an algorithm, we need to think about how we would implement it. Two important data structures we might use are arrays and (linked) lists: an array gives constant-time access to its ith element but is hard to grow, while a list makes insertion and deletion constant-time but takes O(i) time to reach the ith element.

We know the G-S algorithm terminates in at most n^2 iterations, so if we want O(n^2) running time, each iteration must run in constant time. In each iteration we:
  - Identify a free man, m
  - Identify m's highest-ranked woman w to whom he has not yet proposed
  - Determine if w is engaged, and if so, to whom
  - Determine if w prefers her current partner to m

We can implement each of these steps in constant time by:
  - Maintaining a list of free men
  - Maintaining an array "Next" that records, for each man m, the position of the next woman on his preference list he will propose to
  - Maintaining an array "Current" that records, for each woman w, her current partner (or that she is free)
  - Creating an array "Ranking" where Ranking[w, m] contains the rank of man m in woman w's preference list

This allows us to implement the G-S algorithm in O(n^2) time.
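
Here is a minimal Python sketch of this implementation (my own; the names next_w, current and ranking stand in for the "Next", "Current" and "Ranking" arrays, and I assume 0-indexed preference lists):

<code python>
def gale_shapley(man_pref, woman_pref):
    """man_pref[m]: m's preference list of women, most preferred first;
    woman_pref[w]: w's preference list of men. Everyone is numbered 0..n-1."""
    n = len(man_pref)
    # ranking[w][m] = rank of man m on woman w's list, built once in O(n^2)
    ranking = [[0] * n for _ in range(n)]
    for w in range(n):
        for rank, m in enumerate(woman_pref[w]):
            ranking[w][m] = rank
    next_w = [0] * n           # "Next": index of the next woman to propose to
    current = [None] * n       # "Current": woman w's partner, or None if free
    free_men = list(range(n))  # the list of free men

    while free_men:                          # at most n^2 iterations
        m = free_men.pop()                   # step 1: pick a free man, O(1)
        w = man_pref[m][next_w[m]]           # step 2: his next woman, O(1)
        next_w[m] += 1
        if current[w] is None:               # step 3: is w engaged? O(1)
            current[w] = m
        elif ranking[w][m] < ranking[w][current[w]]:  # step 4: compare, O(1)
            free_men.append(current[w])      # w trades up; her ex is free
            current[w] = m
        else:
            free_men.append(m)               # w rejects m; he stays free
    return current                           # current[w] is w's final partner

# Tiny usage example with n = 2:
print(gale_shapley([[0, 1], [0, 1]], [[1, 0], [0, 1]]))  # -> [1, 0]
</code>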
+ | | ||
+ | Readability: | ||

===== Section 4: A Survey of Common Running Times =====

Here are some common running times and examples of algorithms with these running times.

  * **Linear O(n) time**: This is the running time of an algorithm that does a constant amount of work per input element, such as computing the maximum of n numbers or merging two sorted lists.
  * **O(nlogn) time**: This is the running time of an algorithm that splits its input in half, recurses over both halves and combines the solutions in linear time. Mergesort is an O(nlogn) algorithm (p 50).
  * **Quadratic O(n^2) time**: This is the running time of an algorithm with two nested loops, such as finding the pair of points that are closest together in the plane (p 51).
  * **Cubic O(n^3) time**: This is the running time of an algorithm with three nested loops, such as finding the pairs of disjoint sets when given n sets (p 52).
  * **O(n^k) time**: This is the running time of searching over all subsets of size k of a set of size n.
  * **O(2^n) time**: 2^n is the total number of subsets of an n-element set. Finding an independent set of maximum size is an O(2^n) algorithm (p 54).
  * **O(n!) time**: O(n!) grows even faster than O(2^n); n! counts the ways to match n items with n other items, or the ways to arrange n items in order.
  * **Sublinear time**: Binary search has O(logn) running time since it discards a fraction of the input on each iteration (see the sketch below).
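
As an illustration of sublinear time, here is a short binary search sketch of my own; each probe halves the range that can still contain the target:

<code python>
def binary_search(a, target):
    """Return an index of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # probe the middle of the active range
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1           # discard the lower half
        else:
            hi = mid - 1           # discard the upper half
    return -1                      # O(logn) probes in the worst case

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # -> 3
</code>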

Readability:

===== Section 5: Priority Queues =====

This section introduces the priority queue, a data structure that maintains a set of elements where each element has a key denoting its priority. A priority queue supports adding and deleting elements and selecting the element with smallest key.

We can implement each queue operation in O(logn) time using a heap. A heap combines the benefits of a sorted array and a list. A heap can be thought of as a balanced binary tree where the key of a parent is always <= the keys of its children.

Thus, if we implement a priority queue with a heap, we get O(n) time to start a queue on n elements, O(logn) time to insert, O(1) time to find the minimum key, O(logn) time to delete and O(logn) time to extract the element with minimum key.
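
A small sketch using Python's built-in heapq module, which gives the same O(logn) heap behavior (my stand-in for the book's hand-built heap operations; note that heapq does not expose O(logn) deletion at an arbitrary position):

<code python>
import heapq

items = [5, 3, 8, 1, 9, 2]
heapq.heapify(items)           # start a queue on n elements: O(n)
heapq.heappush(items, 4)       # insert a new key: O(logn)
print(items[0])                # find the minimum key: O(1) -> 1
print(heapq.heappop(items))    # extract the minimum element: O(logn) -> 1
</code>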

Readability: