

Richard's Journal

First Two Pages of Preface:

The authors explain the importance of algorithms–and, moreover, of algorithmic thinking. They break down “the algorithmic enterprise” into two steps: “getting to the mathematically clean core” of a problem, and identifying the appropriate “algorithm design techniques,” whatever those are. They explain that their approach in the book is to present algorithm design as a thought process, rather than taking “the most direct route from problem statement to algorithm.”

Chapter 1

The authors introduce the Stable Matching Problem. They explain a more complicated version of the SMP first, involving companies with many slots seeking interns. This is to illustrate the potential real-life usefulness of a solution to the problem, I guess.

Then, they go do their “getting to the mathematically clean core” thing which they described in the preface, and abandon the companies with many slots thing, and simplify the problem to stable matchings of n men with n women. They clearly define their terms “matching” and “perfect matching” and “instability” and “stable matching,” with respect to this example. Then they set up an example case, and show it's possible to have two stable matchings for a given set of preferences.
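Just to pin the definitions down for myself, here's a tiny stability checker in Python, my own sketch and not anything from the book. The function name, the dict format (best-first preference lists), and the names m, n, w, x are all my inventions; the instance encoded is, as far as I can tell, the book's two-stable-matchings example:

<code python>
def is_stable(matching, men_prefs, women_prefs):
    # matching: dict mapping each man to his partner in a perfect matching
    partner_of_woman = {w: m for m, w in matching.items()}
    for m, current in matching.items():
        # Scan the women m strictly prefers to his current partner.
        for w in men_prefs[m][:men_prefs[m].index(current)]:
            # Instability: w also prefers m to her own partner.
            if women_prefs[w].index(m) < women_prefs[w].index(partner_of_woman[w]):
                return False
    return True

men_prefs   = {'m': ['w', 'x'], 'n': ['x', 'w']}  # each list is best-first
women_prefs = {'w': ['n', 'm'], 'x': ['m', 'n']}
print(is_stable({'m': 'w', 'n': 'x'}, men_prefs, women_prefs))  # True
print(is_stable({'m': 'x', 'n': 'w'}, men_prefs, women_prefs))  # True: two stable matchings!
</code>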

Then, they present the Gale-Shapley algorithm to show that there is a stable matching for every possible set of preferences. They proceed to prove things about the algorithm–for instance, that a woman, once proposed to, always remains engaged; that each man works his way down his preference list while a woman's sequence of partners only gets better (by her own ranking) as the algorithm progresses; and so on and so forth. It all builds up to the proof that the matching returned by the algorithm is stable.
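Since the algorithm is concrete enough to run, here's a quick Python sketch of the men-propose version. The algorithm is the book's, but the function and variable names are mine, and the preference dicts repeat the instance from the stability checker above so the block runs on its own:

<code python>
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    # Precompute each woman's rankings so "does w prefer m?" is an O(1) lookup.
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}  # index of the next woman m proposes to
    fiance = {}                              # woman -> man she is engaged to
    free_men = deque(men_prefs)

    while free_men:
        m = free_men.popleft()
        w = men_prefs[m][next_choice[m]]     # m's favorite woman he hasn't proposed to yet
        next_choice[m] += 1
        if w not in fiance:
            fiance[w] = m                    # w accepts her first-ever proposal
        elif rank[w][m] < rank[w][fiance[w]]:
            free_men.append(fiance[w])       # w trades up; her old fiance goes free
            fiance[w] = m
        else:
            free_men.append(m)               # w rejects m; he stays free
    return {m: w for w, m in fiance.items()}

men_prefs   = {'m': ['w', 'x'], 'n': ['x', 'w']}
women_prefs = {'w': ['n', 'm'], 'x': ['m', 'n']}
print(gale_shapley(men_prefs, women_prefs))  # {'m': 'w', 'n': 'x'}
</code>

On this instance every man gets his first choice, which previews the “best valid partners” result below.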

Then, for cases where more than one stable matching is possible, they explore a way to characterize the different stable matchings. Along the way, they prove that every execution of the Gale-Shapley algorithm yields the same stable matching. Specifically, it always gives the men their “best valid partners” and the women their “worst valid partners.” Darn patriarchy.

Then, they briefly introduce the concept of graphs, and proceed to present five “representative problems” in terms of graphs, in ascending order of difficulty, explaining the basic concepts and the special difficulties underlying each.

2.1

The authors start by talking about how, in designing algorithms, efficiency is key. But what exactly does efficiency mean? Well, that is not an easy question. In fact, it's so difficult a question to answer that it took the authors of this book–with their super-smart brains and their Ph.D.s from M.I.T. and Eötvös–three whole tries to make up their minds. Or maybe that's just them doing what they said they'd do in the preface and approaching algorithm design as a “thought process.” It reminds me a little bit of Goldilocks and the Three Bears. The first working definition of efficiency (runs fast on real inputs) is too inflexible. The second working definition (qualitatively better worst-case performance than brute-force search) is too vague. And the third proposed definition is just right…well, not really. The authors point out that there are, of course, exceptions to the definition they settle on (an algorithm is efficient if it runs in “polynomial time”), and that technically something like n^200 shouldn't be considered efficient. But they say the definition usually works in practice, so it's a good one. Plus it's negatable: it lets you say that a problem has no efficient algorithm.
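To pin it down, the definition they settle on says something like this (my paraphrase of the book's wording): an algorithm runs in polynomial time if there are constants $c > 0$ and $d > 0$ such that, on every input of size $n$, it performs at most $c \cdot n^d$ primitive computational steps.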

2.2

First, the authors state that, in measuring the efficiency of an algorithm, it's not really important to count the exact number of steps for a given input size; it's sufficient to describe the growth in less detail than that. Then they introduce the idea of asymptotic upper bounds–big O–define it mathematically, and give an example to illustrate. They explain that a function can have many upper bounds: big O isn't necessarily the smallest upper bound for the algorithm.
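Written out, in the standard notation the book uses:

$$T(n) \text{ is } O(f(n)) \iff \text{there exist constants } c > 0 \text{ and } n_0 \ge 0 \text{ such that } T(n) \le c \cdot f(n) \text{ for all } n \ge n_0.$$

Their running example: if $T(n) = pn^2 + qn + r$ with $p, q, r > 0$, then $T(n) \le (p + q + r)\,n^2$ for all $n \ge 1$, so $T(n)$ is $O(n^2)$; it is also $O(n^3)$, which is exactly the “many upper bounds” point.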

They do a similar thing for omega, the asymptotic lower bound: definition, then illustration. Then they put omega together with the big O dude and introduce the big-theta tight bound, and explain why that's a useful thing to have: tight bounds characterize a running time precisely. They also show a way to get one by taking a limit.
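The matching definitions, restated in the same notation:

$$T(n) \text{ is } \Omega(f(n)) \iff \text{there exist constants } \epsilon > 0 \text{ and } n_0 \ge 0 \text{ such that } T(n) \ge \epsilon \cdot f(n) \text{ for all } n \ge n_0,$$

and $T(n)$ is $\Theta(f(n))$ when it is both $O(f(n))$ and $\Omega(f(n))$. The limit trick: if $\lim_{n \to \infty} f(n)/g(n)$ exists and equals some constant $c > 0$, then $f(n)$ is $\Theta(g(n))$.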

Then they run through (and partially prove) properties of the bounding functions: transitivity; that adding two functions of the same order gives a function of that order; and that if one function is an asymptotic upper bound for another, it is a tight bound for the sum of the two.
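Spelled out, the three properties are: if $f = O(g)$ and $g = O(h)$ then $f = O(h)$ (and likewise for $\Omega$ and $\Theta$); if $f = O(h)$ and $g = O(h)$ then $f + g = O(h)$; and if $g = O(f)$ then $f + g = \Theta(f)$.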

Next, they run through properties of asymptotic bounds for common types of functions. For instance, the bound on a polynomial is always determined by its highest-degree term: a polynomial of degree d is O(n^d). They also point out that not every polynomial bound has n raised to an integer power–running times like O(n^1.59) show up too. Then, for logarithmic time, they point out that it doesn't really matter what the base of the log is, because you can always translate between bases; writing O(log n) is sufficient.
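The base-translation fact is just the change-of-base identity:

$$\log_b n = \frac{\log_a n}{\log_a b},$$

so logs in any two bases (both bigger than 1) differ only by the constant factor $1 / \log_a b$, which the big O absorbs.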

This isn't true for exponentials, however. Just saying an algorithm runs in “exponential time” is sloppy, because exponentials with different bases really are asymptotically different. But that usually doesn't matter too much, because exponential is really slow anyway.
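Concretely: for bases $1 < a < b$, $a^n / b^n = (a/b)^n \to 0$ as $n \to \infty$, so $a^n$ is $O(b^n)$ but not $\Theta(b^n)$. $2^n$ and $3^n$ really are different orders of growth.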
