Chapter 2

Section 2.1

The goal of section 2.1 is to define just what exactly makes an algorithm efficient and how to measure that efficiency. The section starts with a very broad and subjective definition of efficiency and goes on to explain that while the first definition makes logical sense, its wording is too loose and thus hard to measure quantitatively. The second definition makes progress by defining an efficient algorithm as one that performs better than a brute-force search. Brute-force searches will always find an answer (if one can be found), but they are much more likely to achieve the worst-case run-time and, in addition, are an "intellectual cop-out". The third and final definition focuses on defining efficiency based on the mathematical principle of polynomial time.

Polynomial time is a way to express how an algorithm scales as the input size increases. An algorithm runs in polynomial time if its running time is bounded by cN^d for some constants c and d, where N is the input size. So, for example, when an algorithm with running time cN^d has its input size doubled (N becomes 2N), the running time becomes c * (2N)^d = c * 2^d * N^d, which is a slowdown of only a constant factor of 2^d.
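The constant-factor slowdown can be checked numerically. This is a small sketch (the function `runtime` and the constants c and d are made up for illustration) showing that doubling N always multiplies cN^d by exactly 2^d, no matter how large N is:

```python
# Hypothetical polynomial running time T(N) = c * N**d.
def runtime(c, d, n):
    return c * n ** d

c, d = 5, 3
for n in (10, 100, 1000):
    slowdown = runtime(c, d, 2 * n) / runtime(c, d, n)
    # the slowdown is always 2**d == 8, independent of n
    print(n, slowdown)
```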

While the book admits there are potentially better ways to define the efficiency of algorithms, years of trial and error have found that defining efficiency in terms of polynomial time is pretty good. Using this definition, it becomes possible to find that some particular algorithm cannot perform in polynomial time and is thereby inefficient.

This section was extremely logical and did not include too much of the mathematical syntax that always ends up confusing me. I rate this section an 8 for interestingness and a 9 for readability.

Section 2.2

Section 2.2 introduces O, Ω, and Θ notation and how they can be used to describe upper, lower, and tight bounds on the run time of algorithms as their inputs scale up or down. An algorithm is O(f(n)) if its running time is at most a constant multiple of f(n) for all sufficiently large n, and it is Ω(f(n)) if its running time is at least a constant multiple of f(n). While I understood both O and Ω notation, I am not completely sure I have Θ notation down pat.

As I understood the section, if some algorithm is both O(f(n)) and Ω(f(n)), then it is also Θ(f(n)). This means the algorithm's running time is bounded both above and below by constant multiples of f(n), so its growth rate is pinned down exactly, up to constant factors.
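A concrete way to see the "sandwiched from both sides" idea is to pick a function and verify the bounds numerically. This sketch (the function f and the constants 3 and 5 are my own example, not from the book) checks that f(n) = 3n^2 + 2n is squeezed between two constant multiples of n^2, which is exactly what Θ(n^2) means:

```python
# f(n) = 3n^2 + 2n is Theta(n^2): for all n >= 1,
# 3*n^2 <= f(n) <= 5*n^2, since 2n <= 2n^2 when n >= 1.
def f(n):
    return 3 * n ** 2 + 2 * n

for n in range(1, 1000):
    assert 3 * n ** 2 <= f(n) <= 5 * n ** 2
print("f is Omega(n^2) and O(n^2), hence Theta(n^2)")
```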

The section goes on to prove some of the basic identities of the three notations: transitivity, sums of functions, and how each notation handles polynomials, logarithms, and exponentials. I won't bother to describe each of the proofs in detail since they are already supplied in the book and I wouldn't want to bore anyone with the complex technical jargon.

The beginning of the section was extremely interesting; I did not know two other notations for the run-time of algorithms even existed. The proofs, however, were relatively complex, and I would be lying if I said that I understood each one of them after reading through this section. However, I bet knowing that each notation has the capabilities proved in this section will prove invaluable in the future when we write our own proofs. I give this section a 7 for interestingness but only a 3 for readability.

Section 2.3

Section 2.3 goes over what we discussed in class, that is, how to implement the Gale-Shapley algorithm using lists and arrays. More specifically, the book uses fixed-size arrays, linked lists, and one 2D array to ensure the algorithm runs in O(n^2) time. The fixed-size arrays are used to track whom each man has proposed to and to store the final list of engagements. Arrays make sense in this situation since each list holds at most a fixed number of elements, and because arrays support O(1) indexing, checking whether a man has proposed to a woman runs in constant time. A linked list is used to hold the unmatched men, while the 2D array represents each man's preferences. The same data structures apply to women and their preferences.
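Putting those data structures together, the algorithm can be sketched roughly like this. This is my own minimal version, with made-up preference lists; it uses Python lists as the fixed-size arrays, a deque as the linked list of free men, and an inverted preference table as the 2D array that makes each woman's comparison O(1):

```python
from collections import deque

def gale_shapley(man_pref, woman_pref):
    """Sketch of Gale-Shapley; people are numbered 0..n-1."""
    n = len(man_pref)
    # 2D array: woman_rank[w][m] = w's ranking of m, for O(1) comparisons
    woman_rank = [[0] * n for _ in range(n)]
    for w in range(n):
        for rank, m in enumerate(woman_pref[w]):
            woman_rank[w][m] = rank
    next_proposal = [0] * n      # next woman on m's list to propose to
    current = [None] * n         # current[w] = w's fiance, or None
    free_men = deque(range(n))   # "linked list" of unmatched men
    while free_men:
        m = free_men.popleft()
        w = man_pref[m][next_proposal[m]]
        next_proposal[m] += 1
        if current[w] is None:
            current[w] = m                     # w was free; engage
        elif woman_rank[w][m] < woman_rank[w][current[w]]:
            free_men.append(current[w])        # w trades up; old fiance freed
            current[w] = m
        else:
            free_men.append(m)                 # w rejects m
    return [(current[w], w) for w in range(n)]

# usage with hypothetical 3x3 preference lists
print(gale_shapley([[0, 1, 2]] * 3, [[0, 1, 2]] * 3))
```

Each man proposes to each woman at most once, so the loop runs at most n^2 times, and every operation inside it is O(1) thanks to the arrays.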

I found reading the book to be more understandable than the in-class discussion we had on Friday. This isn't to say that the discussion on Friday wasn't helpful, only that the chapter made more sense reading it after we had our discussion. By the end of class on Friday, I felt I had a good grasp on the data types used to implement the algorithm. After reading the chapter, I understood the reasoning behind each data structure.

I would say this section was a 6 on the interesting scale (seeing as we had already discussed it), but an 8 on the readability scale. Breaking each function down by how it was implemented made a lot of sense to me. While we did do the same thing in class, it was a bit harder to follow simply because erroneous answers kept being proposed, which threw me off a little bit.

Section 2.4

Section 2.4 describes the most commonly found algorithmic run-times, which include O(log n), O(n), O(n log n), and O(n^k). It also discusses some of the largest run-times found, which are O(2^n) and O(n!). For all of the run-times, example problems were given, and most had examples of the actual algorithms themselves. While I have seen most of the run-times before, it was good to reinforce my knowledge of them after our in-class discussion.

The one algorithm runtime I have always struggled with the most has been O(nlogn). Reading this chapter helped me garner a much better understanding of why this seemingly strange runtime arises so often and how it is implemented in different settings.
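The standard example of where O(n log n) comes from is mergesort, which the book uses as its model for this running time. This is a minimal sketch of my own: the input is halved log n times, and each level of halving does O(n) work merging:

```python
def merge_sort(a):
    """Mergesort: log n levels of splitting, O(n) merge work per level."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # O(n) merge at this level
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # → [1, 2, 3, 5, 8, 9]
```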

I would give this section a 6 on the interesting scale and an 8 on the readability scale.
