courses:cs211:winter2012:journals:richard:home, revisions 2012/01/24 06:15 to 2012/03/28 00:01 (current), by marmorsteinr
**Preface**

The authors explain the importance of algorithms, and moreover of algorithmic thinking. They break down "the algorithmic enterprise"…

**Chapter 1**
The authors introduce the Stable Matching Problem. They explain a more complicated instance of the SMP first, involving companies with many slots seeking interns. This is to illustrate the potential real-life usefulness of a solution to the problem, I guess.

Then, they go do their "…" thing, simplifying the problem down to n men and n women who must be matched one-to-one.

Then, they present the Gale-Shapley algorithm to show that there is a stable matching for each possible set of preferences. They proceed to prove things about the algorithm: for instance, that women, once proposed to, always remain engaged; that women only move down their preference lists as the algorithm progresses; and so on. It all builds up to the proof that the matching returned by the algorithm is stable.
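The proposal loop is short enough to sketch. This is my own Python rendering of Gale-Shapley under the men-propose convention the book uses; the data format and names are mine, not the book's, and it assumes equal numbers of men and women with complete preference lists:

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Return a stable matching as a dict {woman: man}.

    men_prefs[m] is m's ranked list of women (best first);
    women_prefs[w] is w's ranked list of men (best first).
    Assumes complete lists and equally many men and women.
    """
    # rank[w][m] = position of m in w's list, so "does w prefer m?" is O(1)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free_men = deque(men_prefs)              # every man starts out free
    next_choice = {m: 0 for m in men_prefs}  # next woman each man will try
    engaged = {}                             # woman -> her current fiance

    while free_men:
        m = free_men.popleft()
        w = men_prefs[m][next_choice[m]]     # best woman m hasn't proposed to
        next_choice[m] += 1
        if w not in engaged:                 # w is free: she accepts
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free_men.append(engaged[w])      # w trades up; old fiance is free
            engaged[w] = m
        else:
            free_men.append(m)               # w rejects m; he stays free
    return engaged
```

For example, with `men = {'x': ['a', 'b'], 'y': ['b', 'a']}` and `women = {'a': ['y', 'x'], 'b': ['x', 'y']}`, it returns `{'a': 'x', 'b': 'y'}`: each man gets his first choice, which is the man-favorable behavior the chapter's proofs build toward.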

Then, for cases where more than one stable matching is possible, they explore a way to characterize the different stable matchings. On the way, they prove that every execution of the Gale-Shapley algorithm yields the same stable matching. Specifically, it is the matching in which every man is paired with his best "valid partner," the highest-ranked woman he is matched with in any stable matching.

Then, they briefly introduce the concept of graphs, and proceed to introduce five "representative problems" that motivate the techniques in the rest of the book.

**2.1**

The authors start by talking about how, in designing algorithms, efficiency is key. But what exactly does efficiency mean? Well, that is not an easy question. In fact, it's so difficult a question that it took the authors of this book, with their super-smart brains and their Ph.D.s from M.I.T. and Eötvös, three whole tries to make up their minds, settling in the end on polynomial running time as the working definition. Or maybe that's just them doing what they said they'd do in the preface and approaching algorithm design like a "…"

**2.2**

First, the authors state that, in measuring the efficiency of an algorithm, it's not really important to count the exact number of steps for a given input size; it's sufficient to describe the growth in less detail than that. Then, they introduce the idea of asymptotic upper bounds, or big O, and define it mathematically.
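Spelled out, the definition (my restatement, using the constants c and n₀ the book uses) is:

```latex
T(n) \text{ is } O(f(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \text{ such that } T(n) \le c \cdot f(n) \text{ for all } n \ge n_0.
```

So, for instance, T(n) = pn² + qn + r is O(n²), since for all n ≥ 1 we have T(n) ≤ (p + q + r)·n².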

They do a similar thing for omega, the asymptotic lower bound, and for theta, the asymptotically tight bound.
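Written the same way (again my restatement), with ε as the lower-bound constant:

```latex
T(n) \text{ is } \Omega(f(n)) \iff \exists\, \epsilon > 0,\ n_0 \ge 0 \text{ such that } T(n) \ge \epsilon \cdot f(n) \text{ for all } n \ge n_0.
```

And when T(n) is both O(f(n)) and Ω(f(n)), it is Θ(f(n)): the bound is tight.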

Then they run through (and partially prove) properties of the bounding functions: transitivity, and how the bounds behave over sums of functions.
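The transitivity part is just chaining the constants; here is a one-line sketch of the upper-bound case:

```latex
f(n) \le c\, g(n) \ \text{and}\ g(n) \le c'\, h(n) \;\Rightarrow\; f(n) \le (c\, c')\, h(n) \quad \text{for all } n \ge \max(n_0, n_0'),
```

so f = O(g) and g = O(h) together give f = O(h).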

Next, they run through properties of the different common classes of asymptotic bounds. For instance, the asymptotic bound of a polynomial function is always determined by its highest-order term. They also point out that not every polynomial bound has n raised to an integer power; there can be an O(n^1.59) algorithm, for example.

Then, for logarithmic time, they point out that it doesn't matter what base the logarithm has, since logarithms in different bases differ only by a constant factor.

This isn't true for exponentials, where changing the base changes the function by more than any constant factor.
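A quick numerical illustration of that asymmetry (my own toy example, not from the book): switching log bases only rescales by a constant, while switching exponential bases does not.

```python
import math

n = 10**6

# log_b(n) = ln(n) / ln(b), so two log bases differ by a fixed constant factor:
ratio = math.log2(n) / math.log10(n)      # equals log2(10) for every n > 1
print(round(ratio, 4))                    # ~3.3219, independent of n

# No such constant exists for exponentials: 4^n / 2^n = 2^n grows without bound.
growth = [4**k // 2**k for k in (1, 5, 10)]
print(growth)                             # [2, 32, 1024]
```

Dividing out the smaller exponential leaves something that still grows, which is exactly why O(2^n) and O(4^n) are genuinely different classes while O(log₂ n) and O(log₁₀ n) are the same.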