====== Chapter 2 ======

===== 2.1 Computational Tractability =====

Summary

The author explains why the worst-case approach to measuring complexity is used rather than its seemingly attractive alternative, average-case performance: it is hard to define what random inputs we should generate in order to evaluate average-case complexity.

Two important definitions:

Proposed Definition of Efficiency (2): An algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search. ==> quantify "qualitatively better" ==> polynomial time

Proposed Definition of Efficiency (3): An algorithm is efficient if it has a polynomial running time.

The author concludes that there is a final, fundamental benefit to making our definition of efficiency so specific: it becomes negatable. It becomes possible to express the notion that there is no efficient algorithm for a particular problem.

Readable: 7 / Interesting: 5

===== 2.2 Asymptotic Order of Growth =====

Summary

The way we quantitatively measure complexity is intrinsically based on worst-case analysis. We give an upper bound O(f(n)), meaning the running time is at most c * f(n) for some constant c, and a lower bound Omega(f(n)), meaning the running time is at least epsilon * f(n) for some constant epsilon > 0. The author then works out asymptotic bounds for common functions such as polynomials, exponentials, and logarithms.

Readable: 7 / Interesting: 2

===== 2.3 Implementing the Stable Matching Algorithm Using Lists and Arrays =====

Summary

This section discusses which underlying data structures to use when implementing the stable matching algorithm. The discussion is essentially about the tradeoffs of using a linked list versus an array.

Comparison:
  * Array: fast --- constant time to access an element given its index.
  * List: more flexible, but slower to access an element by position: O(N).

^ Operation ^ Array ^ List ^
| Add element | O(1) | O(1) |
| Delete element | O(N) | O(1) |
| Find element | O(N) | O(N) |

It takes O(N) to convert from an array to a list, and the same the other way around. Refer back to the 1/17 slide for the breakdown of the O(N²) total running time.

Interesting/Readability: 8/5

===== 2.4 A Survey of Common Running Times =====

Summary

When trying to analyze a new algorithm, it helps to have a rough sense of the "landscape" of different running times, so this section covers some of the common running times that we will keep referring back to later.

**Linear time: O(N)**

An example would be iterating through a collection and performing some constant-time operation on each element of that collection.

Examples:
  * Computing the max of a list.
  * Merging two sorted lists: we only have to go through the two lists once to get it done.

**O(N log N) time**

Examples:
  * Mergesort --- the recursion has log N levels, and the merging done at each level takes O(N) in total.
  * The time-stamp problem.

**Quadratic time: O(N²)**

Examples:
  * The matching problem.
  * Calculating the distances between all pairs of points.

**Cubic time: O(N³)**

For each set Si, for each other set Sj, for each element p of Si, determine whether p also belongs to Sj.

**O(N^k) time**

Independent set of size k: given a graph, are there k nodes such that no two of them are joined by an edge? Brute-force search tries every subset of k nodes (a sketch follows below).
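To make the O(N^k) brute-force search concrete, here is a minimal Python sketch (not from the book; the function name has_independent_set and the node-count/edge-list input format are assumptions of mine). It tries every subset of k nodes and checks all pairs inside the subset, for roughly O(N^k * k²) work.

<code python>
from itertools import combinations

def has_independent_set(n, edges, k):
    """Brute-force check: do any k of the nodes 0..n-1 form an independent set,
    i.e. k nodes with no edge between any pair?"""
    edge_set = {frozenset(e) for e in edges}
    for subset in combinations(range(n), k):        # ~n^k candidate subsets
        if all(frozenset(pair) not in edge_set      # O(k^2) pair checks per subset
               for pair in combinations(subset, 2)):
            return True
    return False


# Example: a path on 4 nodes has an independent set of size 2 (e.g. {0, 2})
# but no independent set of size 3.
edges = [(0, 1), (1, 2), (2, 3)]
print(has_independent_set(4, edges, 2))   # True
print(has_independent_set(4, edges, 3))   # False
</code>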
Note: Summary of running times on slide P20 of the 1/19 lecture.

Interesting/Readability: 8/5

===== 2.5 A More Complex Data Structure: Priority Queues =====

Motivation: **After overcoming higher-level obstacles, lower-level implementation details can improve runtime.**

Summary

Priority queue, revisited:
  * Maintains a set of elements S.
  * Each element v ∈ S has a key(v) giving its priority; smaller keys represent higher priorities.
  * Supported operations: add and delete elements, and select the element with the smallest key.

Using a priority queue to sort (insert all N elements, then repeatedly extract the smallest):
  * Unsorted list: N² --- why? Each extract-min has to scan the whole list, which is O(N), and there are N extractions.
  * Array: log N * N²
  * Heap: N log N

Implementation of heaps: know the implementation of **Heapify-up** and **Heapify-down** with an array-based representation (see the sketch below).

Interesting/Readability: 7/7
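Since the notes flag Heapify-up and Heapify-down as something to know, here is a minimal array-based min-heap sketch in Python (the class and method names are my own; the book and slides present this as pseudocode, and this is only one way to realize it). Heapify-up restores heap order after a key is added at the end of the array; Heapify-down restores it after the last element is moved to the root during extract-min; each walks a single root-to-leaf path, so both are O(log N).

<code python>
class MinHeap:
    """Array-based binary min-heap: the parent of index i sits at (i - 1) // 2,
    its children at 2*i + 1 and 2*i + 2.  Smaller keys = higher priority."""

    def __init__(self):
        self.a = []

    def add(self, key):
        # Append at the end, then Heapify-up: O(log N).
        self.a.append(key)
        self._heapify_up(len(self.a) - 1)

    def extract_min(self):
        # Swap the root with the last element, remove it, then Heapify-down: O(log N).
        a = self.a
        a[0], a[-1] = a[-1], a[0]
        smallest = a.pop()
        if a:
            self._heapify_down(0)
        return smallest

    def _heapify_up(self, i):
        # Heapify-up: swap with the parent while the key is smaller than its parent's.
        while i > 0:
            parent = (i - 1) // 2
            if self.a[i] >= self.a[parent]:
                break
            self.a[i], self.a[parent] = self.a[parent], self.a[i]
            i = parent

    def _heapify_down(self, i):
        # Heapify-down: swap with the smaller child while heap order is violated.
        n = len(self.a)
        while True:
            left, right, smallest = 2 * i + 1, 2 * i + 2, i
            if left < n and self.a[left] < self.a[smallest]:
                smallest = left
            if right < n and self.a[right] < self.a[smallest]:
                smallest = right
            if smallest == i:
                break
            self.a[i], self.a[smallest] = self.a[smallest], self.a[i]
            i = smallest


# Usage: adding N keys and repeatedly extracting the minimum sorts them
# in O(N log N), matching the heap row in the comparison above.
h = MinHeap()
for key in [5, 1, 4, 2, 3]:
    h.add(key)
print([h.extract_min() for _ in range(5)])   # [1, 2, 3, 4, 5]
</code>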