Chapter 2
Section 1: Computational Tractability
The first major question that needs to be answered in discussing algorithms is how to define what an efficient algorithm is. A first attempt: an efficient algorithm is one that, when implemented, runs quickly on real input instances. While this definition highlights an important characteristic of efficiency, it does not fully capture it. The first missing piece is where and how well the algorithm is implemented: a poorly designed algorithm may still run quickly on a very fast processor, and conversely, a good algorithm may run slowly if it is implemented badly. Another uncertainty is what a "real" input instance is, since there is no bound on what that covers. A final issue is that the definition says nothing about how an algorithm scales.

When analyzing an algorithm, it is best to look at the worst-case running time. The best case rarely occurs, and when a guaranteed performance time is needed, only the worst case can provide it. Average-case analysis has the same issue, along with the question of what actually defines an "average" case.

A second possible definition of an efficient algorithm is one that achieves qualitatively better worst-case performance, at an analytical level, than brute-force search. A brute-force search enumerates all possible solutions and then looks for one that satisfies the requirements. A more specific definition of efficiency constrains this further: an algorithm is efficient if it runs in polynomial time, meaning its running time is proportional to n^k, where k is a constant and n is the size of the input. Such a specific definition has the advantage of negatability: it becomes possible to state that there is no efficient algorithm for a particular problem.
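To make the brute-force versus polynomial-time contrast concrete, here is a small sketch (my own hypothetical example, not one from the chapter): checking a list for a duplicate value. The first version compares every pair, roughly n^2 steps; the second tracks seen values in a set, roughly n steps. Both are polynomial, but the second is qualitatively faster than the pair-by-pair search.

```python
def has_duplicate_brute_force(items):
    """Compare every pair of items: about n^2 comparisons for n items."""
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                return True
    return False


def has_duplicate_fast(items):
    """Remember values already seen in a set: about n steps for n items."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Both functions answer the same question, so the difference is purely in how the work grows with input size.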
To me, this means that there are simply some problems for which no efficient algorithm is achievable. Overall, this section was generally understandable: a 7/10.
Section 2: Asymptotic Order of Growth
Computational tractability is based upon our ability to define the worst-case running time of an algorithm given an input of size n. The algorithm's running time then grows at a rate that is at most proportional to some function f(n), which becomes the running-time bound for the algorithm. When defining this bound, stating it in very concrete terms provides the most precise definition, but doing so can be an exhaustive activity and provides unnecessary information. Instead, we generically base the running time on the number of steps in a pseudo-code specification of the algorithm, each of which unfolds into a fixed number of primitive steps.

There are three kinds of bounds on the running time of an algorithm: asymptotic upper bounds, asymptotic lower bounds, and asymptotically tight bounds, written with the symbols O, Ω, and Θ respectively. The upper and lower bounds do not specify the exact growth rate of the function that defines the running time; they only bound it. These bounds can be as specific or as loose as you desire, although generally we look for the most specific one. A tight bound specifies the exact growth rate, and is a conclusion we can draw when a function is both O(f(n)) and Ω(f(n)).

Understanding this notation wasn't very difficult for me, as I had experience with all three definitions going back to high school, but the formal proofs and notation for the definitions became much clearer once I learned about them in class. The fact that these bounds apply only from a certain input size upward, and may not hold across every single input size, was something I had not thought of before. The main functions that come up repeatedly in the analysis of algorithms are logarithmic (log n), polynomial (n^k), and exponential (k^n).
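The definitions above can be checked numerically for a specific function. As a hypothetical example (my own, not from the chapter), take f(n) = 3n^2 + 2n: it is O(n^2) because some constant multiple of n^2 bounds it from above past some n0, and Ω(n^2) because another constant multiple bounds it from below; since both hold, it is Θ(n^2).

```python
def f(n):
    """A sample running-time function: f(n) = 3n^2 + 2n."""
    return 3 * n**2 + 2 * n

# Upper bound: f(n) <= 5 * n^2 for all n >= n0 = 1, so f is O(n^2).
assert all(f(n) <= 5 * n**2 for n in range(1, 10_000))

# Lower bound: f(n) >= 3 * n^2 for all n >= 1, so f is Omega(n^2).
assert all(f(n) >= 3 * n**2 for n in range(1, 10_000))

# Both bounds use the same f(n) = n^2, so f is Theta(n^2): the bound is tight.
```

The constants 5 and 3 and the starting point n0 = 1 are my choices; the definitions only require that some such constants exist.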
Logarithmic functions grow more slowly than polynomial functions, which grow more slowly than exponential functions. My prior experience with run-time analysis made this section much easier to understand, though the formal definitions became much clearer after learning about them in class. I would give this section a 9/10.
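The growth-rate ordering above (logarithmic, then polynomial, then exponential) can be seen by tabulating all three for a few values of n. This is a small sketch of my own; the specific functions log2(n), n^3, and 2^n are just representative picks.

```python
import math

# For each n, print a logarithmic, a polynomial, and an exponential value.
# Even at modest n, 2^n overtakes n^3, which overtakes log2(n) immediately.
for n in [10, 20, 40]:
    print(f"n={n:>2}  log2(n)={math.log2(n):5.1f}  n^3={n**3:>8}  2^n={2**n}")
```

By n = 40 the exponential term is already in the trillions while the cubic term is only 64,000, which is why exponential-time algorithms are considered intractable.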
