

Alyssa's Journal

Preface

Algorithmic ideas and notions are not limited to long-standing, well-known problems; they span a wide range of topics and subject fields. Algorithms have many applications beyond computer science, and they also give us a better view of computer science in general. Despite how essential they are to computer science, algorithmic problems do not always come in the form of a precise, mathematical question. Oftentimes they are surrounded by complicated, problem-specific details, some of which are superfluous. Because of this, there are two very important goals when approaching an algorithmic problem: 1) to get to the mathematically clean core of the problem, and 2) to identify the appropriate algorithm design techniques. Once this happens, algorithms will not only provide the solutions to problems, they will form a language that allows you to cleanly express the underlying questions. I found this section pretty interesting, as it gave a nice overview of what algorithm analysis is and why we should study it. I would rate it a 10 (if 10 is the best).

Chapter 1.1: Stable Matching Problem

The Stable Matching Problem arose when two mathematical economists, Gale and Shapley, posed the question: could one design a college admissions process, or a job recruiting process, that was self-enforcing? What they meant was: is it possible to match employers and applicants in such a way that neither an employer nor an applicant would benefit from abandoning their assigned match for some other pairing? For example, if person 1 is working for employer A but would rather work for employer B, and employer B hired person 2 but would prefer person 1, they could just switch and both be happier. That is an unstable match. In full generality the problem is complicated because you have to take into account how many applicants there are, what their preferences are, how many job slots there are, and what each employer's preferences are. To simplify this, we can look at an equal-sized pool of men and women and match them to one another. Each man and woman writes down a preference list ranking everyone of the opposite gender, and at the end of the matching each person is paired with exactly one individual of the opposite gender.

Algorithm Outline:

• Initially, everyone is single.
• An unmarried man m proposes to the woman w who ranks highest on his preference list among the women he has not yet proposed to.
• If the woman w is single, she accepts. If she is already engaged, she consults her own preference list:
  o If m ranks higher than her current partner m_current, she leaves m_current (who becomes single again) and becomes engaged to m.
  o Otherwise, she rejects m's proposal and m remains single.
• The algorithm terminates when no man is single, at which point every woman has been proposed to.

This algorithm returns a perfect matching—one in which no one is single and every individual is paired with exactly one individual of the opposite gender. It terminates after at most n^2 iterations, since in the worst case every man proposes to every woman (n x n = n^2 proposals).

We can prove that the G-S algorithm returns a perfect, stable matching by contradiction. Suppose the matching isn't perfect, i.e. there is a man m who is single at the end of the algorithm. For the algorithm to terminate, m must have proposed to every woman, so there must be a woman w who is also still single; but a woman who has been proposed to never becomes single again, a contradiction. Now suppose the matching isn't stable: there are pairs (m, w) and (m', w') such that m prefers w' to w and w' prefers m to m'. Since men propose in order of preference, either m never proposed to w', in which case m must prefer w to w', or m did propose to w' and she rejected him (immediately, or later by breaking off their engagement) in favor of someone she ranked higher, in which case w' must prefer m' to m. Either way we reach a contradiction, so the G-S algorithm does provide a perfect, stable matching. It is also important to note that all executions of this algorithm yield the same matching.

Although this algorithm provides a clean solution, it does not take fairness into account. A man starts out proposing to his top-choice woman and, as the algorithm proceeds, can bounce between being engaged and single while his prospects worsen. A woman, on the other hand, stays engaged from her first proposal onward, and her prospects only improve as the algorithm proceeds. In the end, however, each man is paired with his best valid partner and each woman is paired with her worst valid partner. So, in general, the side that does the proposing in the G-S algorithm ends up with the best possible stable matching (from its perspective) and the other side ends up with the worst possible stable matching.

I found this section pretty redundant. We covered all of this in class (very thoroughly), so I had a pretty good grasp of the problem and what the algorithm looks like. Had I been confused at all, this definitely would have cleared things up. It was very easy to understand, so I'd rate it a 10.
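To connect the outline above to something runnable, here is a minimal Python sketch of the men-proposing version. The dictionary representation and the names gale_shapley, men_prefs, and women_prefs are my own choices for illustration, not from the textbook:

  from collections import deque

  def gale_shapley(men_prefs, women_prefs):
      # men_prefs[m]   : list of women, most preferred first
      # women_prefs[w] : list of men,   most preferred first
      # Returns a dict mapping each woman to her final partner.
      rank = {w: {m: i for i, m in enumerate(prefs)}
              for w, prefs in women_prefs.items()}   # lower rank = more preferred
      next_choice = {m: 0 for m in men_prefs}        # index of next woman m will try
      engaged_to = {}                                # woman -> man
      free_men = deque(men_prefs)                    # initially everyone is single

      while free_men:
          m = free_men.popleft()
          w = men_prefs[m][next_choice[m]]           # highest-ranked woman m hasn't proposed to
          next_choice[m] += 1
          current = engaged_to.get(w)
          if current is None:                        # w is single: she accepts
              engaged_to[w] = m
          elif rank[w][m] < rank[w][current]:        # w prefers m: she trades up
              engaged_to[w] = m
              free_men.append(current)               # her old partner becomes single again
          else:
              free_men.append(m)                     # w rejects m; he stays single
      return engaged_to

On a tiny instance where both men rank w1 first but w1 prefers m2, m1 ends up with w2, which matches what the hand simulation gives:

  men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
  women = {"w1": ["m2", "m1"], "w2": ["m2", "m1"]}
  print(gale_shapley(men, women))   # {'w1': 'm2', 'w2': 'm1'}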

Chapter 2.1: Computational Tractability

A simple definition of efficiency when it comes to algorithms is: an algorithm is efficient if, when implemented, it runs quickly on real input instances. However, this doesn't really say much about how well the algorithm scales or what exactly an input instance is. So it's best to focus on the worst-case running time, which serves as a bound on the largest possible running time an algorithm can have over all inputs of a given size N. There are some cases where an algorithm performs well on most instances and only a few inputs make it very slow, but this is not common, and average-case analysis is often too dependent on how the "random" inputs are generated to provide much insight. So, for the most part, the worst-case running time is the most illuminating. Another definition of efficiency is: an algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search. This is a better definition, but what is "qualitatively better performance"? In the 1960s, the question of how to quantify the notion of reasonable running time arose. The desirable scaling property is that if the input size N increases by a constant factor, the algorithm should slow down by only a constant factor, rather than multiplicatively. Running times with this property are called polynomial: if the running time is bounded by cN^d for constants c and d, then increasing the input size from N to, say, 2N slows the algorithm down by a factor of at most 2^d, which is itself a constant. This leads to a third definition of efficiency: an algorithm is efficient if it has a polynomial running time. Some examples of polynomial running times are n, n log n, n^2, and n^3. With such a specific definition of efficiency, it is possible to prove that there is no efficient algorithm for a particular problem.
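To make the "factor of 2^d" step concrete, here is the small calculation behind it, assuming (as the book does) that the running time is bounded by cN^d for constants c and d; the exponential comparison is my own illustration of how brute-force-style growth fails the scaling property:

  T(N) \le c N^d \;\Longrightarrow\; T(2N) \le c (2N)^d = 2^d \cdot c N^d = 2^d \, T(N)

whereas a running time like T(N) = c \cdot 2^N gives

  T(2N) = c \cdot 2^{2N} = 2^N \cdot T(N),

so doubling the input multiplies the running time by a factor that itself grows with N.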

I think this section was pretty difficult and confusing, but I think I may have gotten the general idea. I understand why we use worst-case running times, and I think I see why polynomial time is the best (it's definitely better than many of the alternatives, e.g. exponential time). I would rate this a 6.

Chapter 2.2: Asymptotic Order of Growth

If we can express an algorithm's worst-case running time on inputs of size n as growing at a rate proportional to some function f(n), then that function becomes a bound on the running time of the algorithm. Counting the exact number of steps an algorithm executes is not very meaningful, both because we will often be working from pseudo-code and because we want to classify broad classes of algorithms, so we need a more general way of describing running time. We therefore want to express the growth rate of running times in a way that is insensitive to constant factors and low-order terms.

• T(n): the worst-case running time of a given algorithm on an input of size n
• O(f(n)): an asymptotic upper bound; T(n) is of the order of f(n)
• Ω(f(n)): an asymptotic lower bound
• Θ(f(n)): an asymptotically tight bound, which holds when T(n) is both O(f(n)) and Ω(f(n))

Some properties of asymptotic growth rates:

• Transitivity: if a function f is asymptotically upper-bounded by a function g, and g is in turn asymptotically upper-bounded by h, then f is asymptotically upper-bounded by h. The same holds for lower bounds.
  o If f = O(g) and g = O(h), then f = O(h).
  o If f = Ω(g) and g = Ω(h), then f = Ω(h).
• Sums of functions: if an asymptotic upper bound applies to each of two functions f and g, then it applies to their sum.
  o If f = O(h) and g = O(h), then f + g = O(h).
  o Let k be a fixed constant and let f_1, f_2, …, f_k and h be functions such that f_i = O(h) for all i. Then f_1 + f_2 + … + f_k = O(h).
  o If f and g are two functions (taking nonnegative values) such that g = O(f), then f + g = Θ(f), i.e. f is an asymptotically tight bound for the combined function f + g.
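Writing the definitions out with explicit constants helped me keep the variables straight; this is my own rendering in symbols, not a quote from the text (the constants c, \epsilon, and n_0 are the ones the book uses):

  T(n) = O(f(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \ \text{such that}\ T(n) \le c \cdot f(n) \ \text{for all}\ n \ge n_0
  T(n) = \Omega(f(n)) \iff \exists\, \epsilon > 0,\ n_0 \ge 0 \ \text{such that}\ T(n) \ge \epsilon \cdot f(n) \ \text{for all}\ n \ge n_0
  T(n) = \Theta(f(n)) \iff T(n) = O(f(n)) \ \text{and}\ T(n) = \Omega(f(n))

The sum property above then follows almost immediately: if f \le c_1 h and g \le c_2 h beyond some point, then f + g \le (c_1 + c_2) h beyond that point.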

This was definitely the most difficult part of the reading for me. Maybe it is because I missed Monday’s class. Hopefully hearing this again in class will clear things up for me because I was completely lost in all of the variables and their relationships to one another. I’d rate this section a 3.
