

Anne Bailey's Journal

Preface:

Algorithms unravel the secrets of every science. They solve problems through a mathematical core and identify the nature of the problem they are solving. The point of algorithms is both to provide solutions and to construct the framework underlying the problems they are used to evaluate. We see this framework while trying to develop the most efficient algorithms possible. The point of studying algorithms is to better understand how to answer questions efficiently as they are posed. I would consider this preface a 3/10 in terms of how interesting I find it. (3/10)

1.1: Stable Matching

A self-enforcing match is one in which agents acting in their own self-interest could not break a match for mutual gain on each side. The idea developed by Gale and Shapley behind this algorithm is that individual self-interest will reinforce the matching so that no match will be broken after all assignments have been doled out. It is easiest to approach a one-to-one match so that applicants and acceptances are equal and mutual. Though matching is just assigning a single set of outcomes, perfect matching means that we follow this one-to-one rule. Preferences are the basis of the self-interest that influences stability; without preferences, no one would care with whom they were matched, and matching could be a rolling process in pursuit of convenience and efficiency alone. To match perfectly, we start trying to match persons and give every person the chance to break their initial agreement in favor of another person, who may either accept or reject them. If both are better off in their own self-interest by switching, this eliminates an unstable match from the process. The algorithm to match men and women exclusively and heterosexually in marriage specifically favors men: men propose in order of preference while women only respond. This process runs at most n^2 iterations, which is a fairly long running time. Since the algorithm will not terminate until all are matched and it checks at each point for stability, it accomplishes what it sets out to do by producing a stable set of couples. All orders of running the algorithm will yield the same stable matching. (7/10)
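
Here is a minimal sketch of the proposal loop in Python (my own naming and layout, under the assumption that men and women are numbered 0..n-1 and preference lists are rank-ordered arrays):

<code python>
# Gale-Shapley stable matching: a minimal sketch. Men propose in order
# of preference; women trade up only when a proposer outranks their
# current partner.

def stable_matching(men_prefs, women_prefs):
    """men_prefs[m] is m's ranked list of women; women_prefs[w] likewise."""
    n = len(men_prefs)
    # rank[w][m] = position of man m on woman w's list (lower = preferred)
    rank = [[0] * n for _ in range(n)]
    for w in range(n):
        for pos, m in enumerate(women_prefs[w]):
            rank[w][m] = pos
    next_proposal = [0] * n   # index of the next woman m will propose to
    wife = [None] * n         # wife[m] = woman matched to man m
    husband = [None] * n      # husband[w] = man matched to woman w
    free_men = list(range(n))
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if husband[w] is None:                    # w is free: she accepts
            wife[m], husband[w] = w, m
        elif rank[w][m] < rank[w][husband[w]]:    # w prefers m: she trades up
            free_men.append(husband[w])
            wife[husband[w]] = None
            wife[m], husband[w] = w, m
        else:                                     # w rejects m; he stays free
            free_men.append(m)
    return wife
</code>

Each man proposes to each woman at most once, so the loop runs at most n^2 times, which matches the worst case above.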

2.1: Computational Tractability

Studying algorithms helps introduce students to the structure of computational theory; the problems are effectively discrete, at most countably infinite, since computers cannot truly evaluate infinite data sets. That would require an infinite amount of memory with which the computer could store the data to analyze its implications. For this reason, efficient algorithms are a cornerstone of effective computation: without an efficient algorithm, a computer will waste time and memory resources that could be used for other problems. Efficiency is defined by how poorly the worst case may go. Though average-case analysis may be more realistic in evaluating running times and memory use, worst-case analysis is helpful in ruling out what amounts to a brute-force method of computation. Finding an algorithm with a lower worst-case running time is often the better choice. “Proposed definition of efficiency (3): An algorithm is efficient if it has a polynomial running time.” This still does not assure that the most efficient algorithm will run better in every practical case. “There is no efficient algorithm for a particular problem.” My question here is how we could show that no algorithm for a problem is efficient – does a lack of proof suggest this conclusion, or is it purely subjective? (9/10)
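
To make the polynomial cutoff concrete (the numbers are my own illustration, not the book's):

<code latex>
% Why polynomial time is the cutoff: at n = 100, a polynomial bound
% stays tiny while an exponential one is astronomically large.
\[
  n^2 = 100^2 = 10^4,
  \qquad
  2^n = 2^{100} \approx 1.27 \times 10^{30}.
\]
</code>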

2.2: Asymptotic Order of Growth

Algorithms are mainly expressed in terms of pseudo-code and best/worst possible running times. The growth rate of running time as inputs are added is important because an algorithm that slows significantly with more inputs will not use computer resources effectively over time. For this reason, with f(n) a mathematical expression denoting running time, O(f(n)) is an asymptotic upper bound, Omega(f(n)) is an asymptotic lower bound, and if they are equal, the running time is Theta(f(n)), the tight bound that shows the actual order of growth of the algorithm. This is because the running time must be less than or equal to the upper bound and greater than or equal to the lower bound; due to the inequality, if they meet in the middle, the running time must be the point at which they meet. These tight bounds may be found from limits as n goes toward infinity, or they may be calculated from ratios with other functions whose tight bounds are known. For polynomial-time algorithms, there is a constant d such that O(n^d) bounds the running time, independent of the inputs. The same logic applies to logarithms, where log n = O(n^x) for every x > 0, and to exponential algorithms with O(r^n). (8/10)
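
As a worked example of the limit test (the function is my own choice, not the book's):

<code latex>
% If the ratio f(n)/g(n) tends to a positive constant, f grows at the
% same rate as g, so f(n) = \Theta(g(n)).
\[
  f(n) = 4n^2 + 3n, \qquad g(n) = n^2,
\]
\[
  \lim_{n \to \infty} \frac{f(n)}{g(n)}
    = \lim_{n \to \infty} \left( 4 + \frac{3}{n} \right)
    = 4 > 0
  \;\Longrightarrow\; f(n) = \Theta(n^2).
\]
</code>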

2.3: Implementing the Stable Matching Algorithm Using Lists and Arrays

We use different data structures when implementing an algorithm for a specific problem. Our prior Stable Matching algorithm ran in at most O(n^2) time, and we can achieve this using lists and arrays. Choosing a data structure is, however, subjective; we can weigh the running-time trade-offs in order to select the best data structure for the problem at hand. Arrays are not great for keeping lists that change constantly, since they cannot grow and elements are stuck in a specific position. Linked lists are better for lists that change frequently, since it is possible to insert and delete elements in sorted order with shorter running times. We must iterate through a list up to the ith element to access it, though, so this is one trade-off of using a linked list: access to the ith element of an array is O(1), whereas in this list implementation it is O(i). Translating between the two forms takes O(n) running time. To use arrays and linked lists to implement Stable Matching, we should use arrays for the preference lists and indices to refer to each man and woman. In this style, we should assign a specific data structure to every pertinent piece of information. Whichever structure requires the highest runtime to access the data will combine with the number of run-throughs to determine the worst-case running time. These steps combine to make the G-S algorithm run in O(n^2) at worst with implementations of arrays and doubly linked lists. (4/10 – We already knew this – No questions)
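
A toy Python sketch of the trade-off (my own example, not the book's implementation): insertion at the front of a linked list is O(1), but reaching the ith element costs O(i), where an array would give O(1) access.

<code python>
# A minimal singly linked list: constant-time insertion at the front
# (no shifting, unlike an array), linear-time access by index.

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):   # O(1): new node points at old head
        self.head = Node(value, self.head)

    def get(self, i):              # O(i): must walk from the head
        node = self.head
        for _ in range(i):
            node = node.next
        return node.value

lst = LinkedList()
for v in [3, 2, 1]:
    lst.push_front(v)
print(lst.get(0), lst.get(2))  # 1 3
</code>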

2.4: A Survey of Common Running Times

To build an algorithm, we should know what running time the algorithm will require, both by the nature of the amount of information and by how long it will take to use this information to attain an answer. O(n) normally comes from touching each of the n inputs a constant number of times, so it is likely that this type of algorithm will have to look through all the data once. This occurs in a maximum-finding program that goes through elements one by one and records the so-far maximum at every data point. This order also applies to merging two sorted lists, since it involves simply going through each list and appending the next smallest item until both lists are empty. O(n log n) applies to algorithms that split inputs in half, solve each half recursively, and combine the two solutions in O(n) time. Sorting can be solved this way (Mergesort, sketched below) by dividing inputs into two pieces, solving each, and then merging the two sorted lists. This bound is also common where an algorithm's longest step is sorting. O(n^2) applies to most algorithms where we must run through n inputs n times; it tends to be a brute-force response that comes from nested loops. Cubic time describes a triple-nested loop of any type where there are three data sets to compare. O(n^k) follows the same pattern for searching over all subsets of size k of n items. It is not a quick running time, but it can be necessary in a brute-force approach to algorithm solving. Beyond polynomial time we often see O(2^n) or O(n!), which come from comparing subsets of different sizes. n! grows more quickly than 2^n, but both grow so quickly that there is often a solution with a better running time available. Finally, some cases are smaller than linear but larger than constant: binary search, for example, runs in O(log n) time this way.
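
A Mergesort sketch in Python (my own version, offered only to illustrate the split/recurse/merge pattern; the merge step alone is the O(n) sorted-list merge mentioned above):

<code python>
# Mergesort: an example of the O(n log n) pattern of splitting input in
# half, solving recursively, and combining the solutions in O(n).

def merge(left, right):
    """Merge two sorted lists in O(n) by repeatedly taking the smaller head."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])   # one side is empty; append the rest of the other
    out.extend(right[j:])
    return out

def mergesort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    return merge(mergesort(items[:mid]), mergesort(items[mid:]))

print(mergesort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
</code>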

My question about this section is how we can know with certainty that a proposed solution to a problem is efficient. The section provided a wealth of examples without sketches, and it felt like a bit too much information in one place. (7/10)

2.5: A More Complex Data Structure: Priority Queues

Our approach to algorithms is to find a brute-force solution, then to improve upon it as best we can; a good goal is polynomial time or better. The data structure to evaluate in this section is the priority queue. For this, each element has a priority value (key), and we always want to select the highest-priority element from the set. Elements won't come in in order of priority, so each one must be assigned a position in the data structure when it is added, independently of the priority it carries. “A sequence of O(n) priority queue operations can be used to sort a set of n numbers,” since adding n numbers and then extracting the smallest number n times takes O(n) operations in total. Therefore, if items can be put in and taken out in O(log n) time each, we can sort n numbers in O(n log n) running time. The data structure that yields this result is a heap, since it can be implemented as a balanced binary tree of maximum height log n. “Heap order: For every element v, at a node i, the element w at i's parent satisfies key(w) ≤ key(v).” An array of n nodes can maintain this heap with a parent node at position i and its children at 2i and 2i+1. When we add or move items in the heap, we use Heapify-up and Heapify-down to move items around and make sure each sits at a node where its key is correct in relation to the other nodes' keys. A table of running times for implementing priority queues with heaps is available on page 64. (7/10 – No questions since we already studied this in class)
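
A minimal min-heap sketch in Python (my own code, not the book's; it uses 0-based indexing, so the children of node i sit at 2i+1 and 2i+2 rather than the 1-based 2i and 2i+1 scheme above):

<code python>
# Binary min-heap in an array: insert and extract_min are O(log n)
# because heapify-up/down walk at most the height of the tree.

class MinHeap:
    def __init__(self):
        self.a = []

    def insert(self, key):                # O(log n)
        self.a.append(key)
        self._heapify_up(len(self.a) - 1)

    def extract_min(self):                # O(log n)
        a = self.a
        a[0], a[-1] = a[-1], a[0]         # move last leaf to the root
        smallest = a.pop()
        self._heapify_down(0)
        return smallest

    def _heapify_up(self, i):
        while i > 0:
            parent = (i - 1) // 2
            if self.a[i] < self.a[parent]:   # child outranks parent: swap
                self.a[i], self.a[parent] = self.a[parent], self.a[i]
                i = parent
            else:
                break

    def _heapify_down(self, i):
        a, n = self.a, len(self.a)
        while True:
            left, right, smallest = 2 * i + 1, 2 * i + 2, i
            if left < n and a[left] < a[smallest]:
                smallest = left
            if right < n and a[right] < a[smallest]:
                smallest = right
            if smallest == i:                # heap order restored
                break
            a[i], a[smallest] = a[smallest], a[i]
            i = smallest

h = MinHeap()
for k in [5, 1, 4, 2]:
    h.insert(k)
print([h.extract_min() for _ in range(4)])  # [1, 2, 4, 5]
</code>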
