Chapter 2 – Basics of Algorithm Analysis

My notes on the assigned sections of Chapter 2 of Algorithm Design by Jon Kleinberg and Éva Tardos. This chapter covers the resource requirements of algorithms, namely the time and space they use, and later develops run-time bounds for some basic, popular algorithms.

2.1 – Computational Tractability

Section 1 of Chapter 2 attempts to define what efficiency means for an algorithm. The book's initial proposed definition is that an algorithm is efficient if, when “implemented, it runs quickly on real input instances”. This definition has problems: bad algorithms can run fast on small test cases, and good algorithms can run slowly if they are coded poorly. Furthermore, it doesn't take into account how an algorithm scales as its input grows. So a second definition is proposed: “an algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search”. This leads to polynomial time as the working definition of efficiency. With a polynomial-time algorithm, when the input size increases by a constant factor, the running time slows down by only a constant factor: “If the input size increases from N to 2N, the bound on the running time increases from cN^d to c(2N)^d”, a slow-down of a factor of 2^d. The third definition of efficiency simply says that polynomial time is efficient. Of course, with large constants or high exponents, a polynomial-time algorithm may still not run efficiently in practice.
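
To make the scaling argument concrete, here is a minimal Python sketch (my own, not from the book; the constants c and d are arbitrary illustrative choices) showing that doubling N multiplies a polynomial bound by exactly 2^d, no matter how large N is:

  # For a running time bounded by c * N^d, doubling the input size N
  # multiplies the bound by 2^d, independent of N. Here c = 5, d = 3
  # are arbitrary illustrative constants, so the slow-down factor is
  # 2^3 = 8.

  def time_bound(n, c=5, d=3):
      """Polynomial running-time bound T(n) = c * n^d."""
      return c * n ** d

  for n in [100, 1000, 10000]:
      ratio = time_bound(2 * n) / time_bound(n)
      print(f"N = {n:>6}: T(2N)/T(N) = {ratio}")  # always 8.0 = 2^d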

This section was very readable, and I would give it a score of 10/10 on both readability and my interest in it.

2.2 – Asymptotic Order of Growth

This section covers the ways in which we can classify an algorithm's order of growth. These classifiers are O, Ω, and Θ. O(.) defines an asymptotic upper bound for an algorithm, Ω(.) defines an asymptotic lower bound, and Θ(.) defines an asymptotically tight bound. An asymptotically tight bound means a function is both O(g) and Ω(g) for the same g. Asymptotic growth rates have many properties, including transitivity: if f = O(g) and g = O(h), then f = O(h), and if f = Ω(g) and g = Ω(h), then f = Ω(h). There are also useful results about adding two functions: if f = O(h) and g = O(h), then f + g = O(h). For polynomials, the asymptotic rate of growth is determined by the highest-order term. Algorithms can run in polynomial time even if their bound isn't written as n raised to an integer power (like O(n log n)). For logarithms, “the base of the logarithm is not important when writing the bounds in asymptotic notation”. Finally, with exponentials the terminology gets very sloppy, but when someone refers to an algorithm as exponential, they mean that its running time grows as fast as some exponential function.
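
Here is a minimal Python sketch (my own illustration, not from the book) of two facts above: logarithms of different bases differ only by a constant factor, so the base doesn't matter asymptotically, while (n log n)/n^2 shrinks toward 0, showing n log n grows strictly slower than n^2:

  import math

  # log2(n)/log10(n) is the constant log2(10) ~= 3.32 for every n,
  # so switching log bases only changes the hidden constant in O(.).
  # (n log n)/n^2 tends to 0, so n log n = O(n^2) but not Θ(n^2).

  for n in [10, 10**3, 10**6]:
      base_ratio = math.log2(n) / math.log10(n)   # constant in n
      poly_ratio = (n * math.log2(n)) / n**2      # tends to 0
      print(f"n = {n:>8}: log2(n)/log10(n) = {base_ratio:.4f}, "
            f"(n log n)/n^2 = {poly_ratio:.6f}")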

Overall, I thought this section was pretty readable and very interesting. I'd give it a 7/10 for readability and a 10/10 for how interesting I found it. A couple of the proofs were a bit difficult to follow, but with a slow, careful read I was able to understand them.

2.3 – Implementing the Stable Matching Algorithm Using Lists and Arrays
