Preface Summary

The preface introduced the textbook's content, outlining each chapter and the format in which the material would be explained. In addition to providing practice problems, the book contains solved exercises in each chapter that step through the problem-solving process. The book also has "pedagogical features" to help make it a good teaching resource. Each chapter is divided into sections: the problem, designing the algorithm, and analyzing the algorithm. This separates the work into the major steps for approaching and solving a particular problem. A chapter-by-chapter synopsis follows. Chapter one introduces general algorithms through the Stable Matching Problem. Chapters two and three present some review from earlier computer science classes and basic data structures. Chapters four through seven cover greedy algorithms, divide and conquer, dynamic programming, and network flow, and chapters eight and nine cover computational intractability. From there, chapters ten through twelve explain three techniques for approaching computationally intractable problems: identification of structured special cases, approximation algorithms, and local search heuristics.

I give the preface a ten in terms of readability because it gave a quick overview of the course that was enough to help me understand what the class will be about, without being too long-winded or dense.

Chapter 1 Summary

Chapter 1 introduced our first problem, the Stable Matching Problem. According to the book, this problem provides an example of the course's themes. The Stable Matching Problem was formulated by David Gale and Lloyd Shapley in 1962, when they asked whether one could design a self-enforcing process for college admissions or job recruiting. The book uses interns applying for jobs as the example situation. With the interns, the process was not self-enforcing because both the interns and the employers were allowed to act in self-interest, meaning interns could decline certain offers in favor of a better offer, or employers could retract an offer in favor of a stronger candidate. The book then presented the conditions for a stable assignment: Given a set of preferences among employers and applicants, can we assign applicants to employers so that for every employer E and every applicant A who is not scheduled to work for E, at least one of the following two things is the case?

1. E prefers every one of its accepted applicants to A
2. A prefers her current situation over working for employer E

If this holds, the outcome is stable. In other words, each of n applicants applies to each of n companies, and each company wants to accept a single applicant. We can use the matching of men to women to describe a stable matching algorithm:

M = a set of n men
W = a set of n women

M x W describes the set of all possible ordered pairs.

• A matching S is a set of ordered pairs, each from M x W, with the property that each member of M and each member of W appears in at most one pair in S.
• A perfect matching S is a matching with the property that each member of M and each member of W appears in exactly one pair in S.

We can add preferences to the situation, with the following terms:

• Each man ranks all the women
• Each woman ranks all the men
• No ties allowed

An instability with respect to S is a pair (m, w) not in S where m prefers w to his current partner and w prefers m to her current partner; that is, both would rather be matched with each other. We want our algorithm to create a set of marriages with no instabilities. A matching S is stable if:

1. It is perfect
2. There is no instability with respect to S
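To make these definitions concrete, here is a minimal Python sketch of a stability check, written by me rather than taken from the book; the data layout (dicts mapping each person to a ranked preference list, and S as a dict from men to women) is my own assumption.

  # Hypothetical illustration of the Chapter 1 definitions, not the book's code.
  # men_prefs / women_prefs map each person to a list of names ranked from
  # most preferred to least preferred; S maps each man to his matched woman
  # (a perfect matching).

  def prefers(prefs, person, a, b):
      """True if `person` ranks a ahead of b in their preference list."""
      ranking = prefs[person]
      return ranking.index(a) < ranking.index(b)

  def is_stable(S, men_prefs, women_prefs):
      """Return True if the perfect matching S contains no instability."""
      partner_of_woman = {w: m for m, w in S.items()}  # invert S for lookups
      for m, w_current in S.items():
          for w in men_prefs[m]:
              if w == w_current:
                  break  # everyone after this point is ranked below m's partner
              # m prefers w to his current partner; it is an instability
              # if w also prefers m to her current partner.
              if prefers(women_prefs, w, m, partner_of_woman[w]):
                  return False
      return True

  # Tiny made-up example with two men and two women.
  men_prefs = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
  women_prefs = {"w1": ["m1", "m2"], "w2": ["m1", "m2"]}
  print(is_stable({"m1": "w1", "m2": "w2"}, men_prefs, women_prefs))  # True
  print(is_stable({"m1": "w2", "m2": "w1"}, men_prefs, women_prefs))  # False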

I’d give this chapter a 10 in terms of readability. The process for developing the stable matching problem was clearly outlined and I was able to follow it and understand it by the end. The class lecture reinforced the reading and drew out the most important points from the text. I didn’t come up with any specific questions after reading this section.

Chapter 2.1-2 Summary

Section 2.1 is about computational tractability and includes a discussion of the common themes and design principles shared by algorithms. Specifically, the section discusses running time and works toward a definition of efficiency, one that is "platform independent, instance-independent, and of predictive value with respect to increasing input sizes," using the Stable Matching Problem as an example. First, the book looks at the worst-case running time, which is the largest possible running time over all inputs of size N. To judge how good a worst-case running time is, we can compare it to a brute-force search over the space of possible solutions. The Stable Matching Problem, like many problems, has a "compact representation, implicitly specifying a giant search space," meaning a brute-force search over all possible matchings is far too slow.
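A rough size comparison (my own illustration, not numbers from the book) shows why: with n men and n women, the input is just 2n preference lists of length n, while the space of perfect matchings a brute-force search would have to examine has size n!.

  # My own illustration: compact input vs. giant search space.
  import math

  for n in [5, 10, 20]:
      input_entries = 2 * n * n        # 2n preference lists, each of length n
      matchings = math.factorial(n)    # perfect matchings a brute-force search must consider
      print(n, input_entries, matchings)
  # n = 20 already gives only 800 input entries but about 2.4 * 10**18 matchings.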
As an alternative, the book proposes polynomial running time as the benchmark: "when the input size increases by a constant factor—say a factor of 2—the algorithm should only slow down by some constant factor C." This concept leads to the final definition of efficiency, which says that "an algorithm is efficient if it has a polynomial running time."
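A quick sanity check on that definition (my own arithmetic, not the book's): if T(n) = c * n^d for constants c and d, then doubling the input size gives T(2n) = c * (2n)^d = 2^d * c * n^d = 2^d * T(n), so the algorithm slows down by the constant factor C = 2^d, independent of n. An exponential running time like T(n) = c * 2^n fails this test, since T(2n) = c * 2^(2n) = 2^n * T(n), a slowdown factor that grows with n.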
Section 2.2 describes asymptotic order of growth in terms of bounds. O represents an upper bound, where T(n) is bounded above by a constant multiple of f(n); we write this as T(n) = O(f(n)). The Ω notation denotes an asymptotic lower bound, where T(n) is at least a constant multiple of some function f(n), bounding T(n) from below. Finally, Θ denotes an asymptotically tight bound, which holds when T(n) is both O(f(n)) and Ω(f(n)). O, Ω, and Θ share the properties of transitivity and additivity. The book then compares running times in terms of polynomials, logarithms, and exponentials, with exponentials growing the fastest, then polynomials, then logarithms.

I'm not sure how well I understood this section in the book. Class discussion helped a little, but I don't feel all that confident about describing bounds in terms of equations, as the book does, or about distinguishing the running time of lines of an algorithm in terms of O notation. I'm hoping the practice problems will help clarify these concepts for me. I give this section an 8 in terms of readability, but only because I personally found the chapter more difficult to read, particularly the equations; I do like how it's divided into sections, so I at least understood what the book was trying to explain.
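For my own reference, the formal definitions behind the notation in this section (restated from memory, so the book may phrase the constants slightly differently) are:

• T(n) = O(f(n)) if there exist constants c > 0 and n0 ≥ 0 such that T(n) ≤ c * f(n) for all n ≥ n0.
• T(n) = Ω(f(n)) if there exist constants ε > 0 and n0 ≥ 0 such that T(n) ≥ ε * f(n) for all n ≥ n0.
• T(n) = Θ(f(n)) if T(n) is both O(f(n)) and Ω(f(n)).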