Anne Bailey's Journal

Preface:

Algorithms unravel the secrets of every science. They solve problems through a mathematical core and identify the nature of the problem they are solving. The point of algorithms is both to provide solutions and to construct the framework underlying the problems they are used to evaluate. We see this framework while trying to develop the most efficient algorithms possible. The point of studying algorithms is to better understand how to efficiently answer questions as they are posed. I would consider this preface a 3 out of 10 in terms of how interesting I find it. (3/10)

1.1: Stable Matching

A self-enforcing match is one in which agents acting in their own self-interest could not break a match for mutual gain on each side. The idea developed by Gale and Shapley behind this algorithm is that individual self-interest will reinforce the outcome, assuring that no matches will be broken after all assignments have been doled out. It is easiest to approach a one-to-one match so that applicants and acceptances are equal and mutual. Though a matching is just an assignment of one set to another, a perfect matching is one that follows this one-to-one rule. Preferences are the basis of the self-interest that influences stability; without preferences no one would care with whom they were matched, and matching could be a rolling process in pursuit of convenience and efficiency alone. To match perfectly, we start matching people and give every person the chance to break their initial agreement in favor of another person, who may either accept or reject them. If both are better off in their own self-interest by switching, this eliminates an unstable match from the process. The algorithm to match men and women exclusively and heterosexually in marriage specifically favors men: men propose in order of preference, while women can only accept or reject proposals. This process runs for at most n^2 iterations, which is a fairly long running time. Since the algorithm does not terminate until everyone is matched, and no unstable pair can survive its propose-and-reject process, it accomplishes what it sets out to do by producing a stable perfect matching. Moreover, every order of execution of the algorithm yields the same stable matching. (7/10)
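The propose-and-reject process described above can be sketched in Python. This is a minimal sketch, not the book's pseudocode verbatim; the dictionary-based preference format and the function name gale_shapley are my own choices for illustration.

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Propose-and-reject: men propose in order of preference; a woman
    trades up only when a proposer she ranks higher arrives.
    Returns a dict mapping each woman to her final match."""
    # rank[w][m] = position of m in w's list, for O(1) comparisons.
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}  # next woman each man will propose to
    engaged_to = {}                          # woman -> current fiance
    free_men = deque(men_prefs)
    while free_men:                          # at most n^2 proposals in total
        m = free_men.popleft()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m                # w accepts her first proposal
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free_men.append(engaged_to[w])   # w prefers m; her old fiance is free
            engaged_to[w] = m
        else:
            free_men.append(m)               # w rejects m; he stays free
    return engaged_to
```

For example, with men = {'a': ['x', 'y'], 'b': ['x', 'y']} and women = {'x': ['b', 'a'], 'y': ['a', 'b']}, man a's proposal to x is displaced by b's, so a moves on to y, and the result is stable.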

2.1: Computational Tractability

Studying algorithms helps introduce students to the structure of computational theory; problems are effectively discrete, at most countably infinite, since computers cannot truly evaluate infinite data sets. That would require an infinite amount of memory with which the computer could store data to analyze its implications. For this reason, efficient algorithms are a cornerstone of effective computation: without an efficient algorithm, a computer wastes time and memory that could be used for other problems. Efficiency is defined by how badly the worst case may go. Though average-case analysis may seem more realistic for evaluating running times and memory use, worst-case guarantees are what let us rule out brute-force methods of computation. Finding an algorithm with a lower worst-case running time is usually the better choice. “Proposed definition of efficiency (3): An algorithm is efficient if it has a polynomial running time.” This still does not assure that the most efficient algorithm will run better in every practical case. “There is no efficient algorithm for a particular problem.” My question here is how we could know that no algorithm is efficient – does a lack of proof suggest this conclusion, or is it purely subjective? (9/10)

2.2: Asymptotic Order of Growth

Algorithms are mainly expressed in terms of pseudo-code and bounds on their possible running times. The growth rate of the running time as inputs are added is important because an algorithm that slows significantly with more inputs will not use computing resources effectively. With f(n) denoting a function of the input size, O(f(n)) is an asymptotic upper bound on the running time and Omega(f(n)) is an asymptotic lower bound; if a running time is both O(f(n)) and Omega(f(n)), then it is Theta(f(n)), the tight bound that shows the actual order of growth of the algorithm. This is because the running time can grow no faster than its upper bound and no slower than its lower bound, so if the two meet, the running time must be the point at which they meet. These tight bounds may be found from limits of ratios as n goes toward infinity, or they may be calculated from comparisons with other functions whose tight bounds are known. A polynomial-time algorithm has some constant d such that O(n^d) bounds its asymptotic running time regardless of the size of the inputs. Similar reasoning covers logarithms, where log n is O(n^x) for every x > 0, and exponential algorithms, which run in O(r^n). (8/10)
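The limit-of-ratios idea mentioned above can be checked numerically. This is a rough sketch of my own, not a method from the book: if f(n)/g(n) levels off at a positive constant as n grows, that suggests f is Theta(g); if the ratio shrinks toward zero, f grows strictly slower than g.

```python
import math

def ratio_trend(f, g, ns):
    """Compute f(n)/g(n) for a list of growing n values."""
    return [f(n) / g(n) for n in ns]

ns = [10 ** k for k in range(1, 7)]
# 3n^2 + 10n versus n^2: the ratio approaches the constant 3,
# consistent with 3n^2 + 10n being Theta(n^2).
poly_ratios = ratio_trend(lambda n: 3 * n * n + 10 * n, lambda n: n * n, ns)
# log n versus n^0.5: the ratio shrinks toward 0, consistent with
# log n being O(n^0.5) but not Omega(n^0.5).
log_ratios = ratio_trend(lambda n: math.log(n), lambda n: n ** 0.5, ns)
```

This is only numerical evidence, of course; the actual tight bounds come from taking the limit as n goes toward infinity.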
