Onye's Journal

Preface:

Algorithms matter: they give us systematic ways to solve problems and help us understand the structure behind them. This book is about designing and analyzing algorithms.

Section 1.1:

Section 1.1 introduced the stable matching problem and the Gale-Shapley solution to it. A typical example of the stable matching problem is n men and n women looking to be matched, with each man and each woman having a strict order of preferences. For a given matching, m and w form an unstable pair if they each prefer the other to their current partner. For n men and n women with strict preferences, it is always possible to find a matching with no unstable pairs, and the Gale-Shapley algorithm does just that. It works by taking any free man and having him propose to the highest-ranked woman he has not yet proposed to. If she is unmatched, or prefers him to her current match, the proposal succeeds: she is paired with him and her previous partner (if any) becomes free again. Repeat until no man is free; since no man proposes to the same woman twice, this takes at most n^2 proposals.
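The procedure above can be sketched in Python (the dict-based representation and the names below are my own, not from the book):

```python
# A minimal sketch of the Gale-Shapley algorithm. men_prefs[m] is m's list
# of women, best first; women_prefs[w] is w's list of men, best first.
def gale_shapley(men_prefs, women_prefs):
    # rank[w][m]: position of man m in w's list (lower = preferred)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}  # index of next woman to try
    partner = {}                               # partner[w] = w's current match
    free_men = list(men_prefs)
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in partner:
            partner[w] = m                     # w was unmatched: accept
        elif rank[w][m] < rank[w][partner[w]]:
            free_men.append(partner[w])        # old partner becomes free
            partner[w] = m
        else:
            free_men.append(m)                 # rejected; m stays free
    return {m: w for w, m in partner.items()}
```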

Section 2.1:

Chapter 2 discusses computational tractability and the running time of algorithms, along with asymptotic order of growth and Big-O notation. Algorithms that run in polynomial time are O(n^k) for some constant k; these are said to be efficient. In contrast, some algorithms are O(k^n) for some k > 1. These quickly become impossible to run in a reasonable amount of time: an algorithm taking 2^n steps finishes instantly for n = 20, but for n = 100 it would take vastly longer than the age of the universe on any real machine.
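A quick back-of-the-envelope calculation makes the gap concrete, assuming a machine that does 10^9 simple steps per second (the machine speed is an illustrative assumption):

```python
# Seconds needed for an n^3 algorithm versus a 2^n algorithm at 10^9 steps/sec.
ops_per_second = 10**9
for n in (20, 50, 100):
    poly = n**3 / ops_per_second   # cubic: still fractions of a second at n=100
    expo = 2**n / ops_per_second   # exponential: astronomical long before n=100
    print(n, poly, expo)
```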

Section 2.3

In order to get the Gale-Shapley algorithm running in O(n^2) steps, we need each iteration of the loop to run in constant time.

Because we need to both add and remove elements from the set of free men, a linked list is preferable to an array. (Though it is entirely feasible for this problem to use arrays without increasing the complexity. You can, for example, implement a stack with an array, and since the maximum number of men is fixed there is no fear of running out of space, so all stack operations are constant time.)

To keep track of the woman a man will propose to next, you can use an array called Next, where Next[m] is the number of the woman man m will propose to next. Alternatively, you could keep each man's preference list as a linked list and delete women from it as he proposes to them.

Finally, we need an array Ranking[w,m] that gives the ranking of man m in w's preference list. This lets us compare two men in constant time to see which one she prefers. Building this array should be done once, outside of the loop.
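The three data structures above can be sketched together in Python (a toy sketch with illustrative data; the names follow the text):

```python
# Constant-time building blocks for an O(n^2) Gale-Shapley implementation.
n = 3
free_men = list(range(n))       # a list used as a stack: push/pop are O(1)
Next = [0] * n                  # Next[m]: index of the next woman m proposes to
prefs = [[0, 1, 2]] * n         # each woman's list of men, best first (toy data)

# Ranking[w][m]: position of man m in woman w's list, built once before the loop
Ranking = [[0] * n for _ in range(n)]
for w in range(n):
    for i, m in enumerate(prefs[w]):
        Ranking[w][m] = i
# Now "does w prefer m to m2?" is the O(1) test Ranking[w][m] < Ranking[w][m2]
```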

Section 2.4

This section lists some common running times, with examples of algorithms that achieve them and explanations for why that is the case.

Linear Time: An algorithm can be linear if it processes the input in a single pass, spending a constant amount of time on each part of the input. An example of this is the most direct algorithm for computing the maximum of a list of numbers.

Another method is a sort of accounting scheme: charge the cost of each iteration to an element, so the total cost is bounded by the number of elements. An example where this approach works is analyzing the running time of merging two sorted lists.
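A minimal sketch of that merge step; the charging argument is visible in that each loop iteration appends exactly one element to the output, so the total work is O(len(a) + len(b)):

```python
# Merge two sorted lists into one sorted list in linear time.
def merge(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1   # charge this iteration to a[i]
        else:
            out.append(b[j]); j += 1   # charge this iteration to b[j]
    out.extend(a[i:])                  # one of these is empty; the other
    out.extend(b[j:])                  # is already sorted
    return out
```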

N log N:

N log N is common to “divide and conquer” algorithms, where the input is cut into pieces, each piece is solved separately, and the results are then combined. For example, sorting algorithms such as mergesort (and quicksort in the average case). It also appears as the running time whenever sorting is the most expensive component of an algorithm.

Quadratic Time: Quadratic running time appears when processing is done on every possible pair from a set of n elements. For example, comparing the distance between every two points in a set to find the minimum. It can also arise from nested loops.
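The closest-pair example above, done the brute-force way; the loop over all n*(n-1)/2 pairs is exactly what makes it quadratic:

```python
# Brute-force minimum distance over all pairs of points: O(n^2).
from itertools import combinations

def closest_pair_distance(points):
    best = float('inf')
    for (x1, y1), (x2, y2) in combinations(points, 2):  # every pair once
        d = ((x1 - x2)**2 + (y1 - y2)**2) ** 0.5
        best = min(best, d)
    return best
```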

Cubic Time:

Cubic time arises from triply nested loops, or from selecting a pair and then performing a linear loop over the elements.

O(n^k) Time:

O(n^k) arises from k nested loops, or from examining every subset of k elements (there are O(n^k) such subsets).

Beyond Polynomial Time: Some algorithms take even more than polynomial time. Examples of such running times are O(2^n) and O(n!). O(2^n) can arise when we look at every possible subset of n elements. O(n!) can arise when looking at all possible permutations of n items, or all ways of matching n items to n other items.

Sublinear Time: Sometimes algorithms take less than linear time. Since just reading the input takes linear time, these algorithms work by querying the input at selected locations rather than reading it completely. An example of such an algorithm is binary search, which finds a value in a sorted array while probing only O(log n) entries.
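A minimal sketch of binary search; each probe halves the candidate range, which is where the O(log n) bound comes from:

```python
# Return the index of target in sorted_list, or -1 if absent: O(log n) probes.
def binary_search(sorted_list, target):
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        if sorted_list[mid] < target:
            lo = mid + 1    # target can only be in the right half
        else:
            hi = mid - 1    # target can only be in the left half
    return -1
```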

Section 2.5

A heap is a tree where each node has a key and the keys of a node's children are no smaller than that node's key. Using the operations HeapifyUp and HeapifyDown, which fix the heap when it is violated at one particular location, it is possible to implement insertion and deletion in logarithmic time. The heap is a natural choice for implementing a priority queue because the highest-priority element is always at the root, and it beats other implementations, which require at least one operation to run in O(n).
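Python's standard heapq module is one concrete implementation of this structure (a min-heap stored in a list), so the priority-queue behavior can be seen directly:

```python
# heapq gives O(log n) insertion and extract-min on a plain Python list.
import heapq

pq = []
for key in [5, 1, 3]:
    heapq.heappush(pq, key)     # sifts the new key up to restore heap order
smallest = heapq.heappop(pq)    # removes the root, then sifts down to repair
# smallest is 1: the minimum key always sits at the root of the heap
```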

Section 3.4

Breadth-first search can be used to test bipartiteness. A graph is bipartite if and only if it has no odd cycles. Color each vertex based on the parity of its BFS distance from an arbitrary starting node; the graph is bipartite exactly when no edge joins two vertices of the same color.
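A sketch of that two-coloring test, assuming the graph is given as a dict mapping each vertex to a list of neighbors and is connected from the start vertex:

```python
# BFS two-coloring: color = parity of distance from start. An edge between
# two same-colored vertices lies inside one BFS layer and closes an odd cycle.
from collections import deque

def is_bipartite(graph, start):
    color = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in color:
                color[v] = 1 - color[u]
                queue.append(v)
            elif color[v] == color[u]:   # same parity on both ends: odd cycle
                return False
    return True
```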

Section 3.5

A directed graph is a graph where edges are directed arrows pointing from one vertex to another. The concept of a connected component changes somewhat for directed graphs, since a path from vertex a to vertex b does not imply a path from b to a. A graph is strongly connected if every vertex is reachable from every other vertex.

Section 3.6

Directed Acyclic Graphs (DAGs) are directed graphs that are acyclic. Any DAG has at least one topological ordering: an ordering of the vertices such that if a has an arrow going to b, then a comes before b. To find such an ordering, build it by repeatedly removing a “least” element of the DAG, that is, one with no arrow pointing to it. Such a vertex always exists: if every vertex had an incoming arrow, following arrows backward forever would eventually revisit a vertex and form a cycle.
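That repeated-removal idea can be sketched as follows, assuming the DAG is a dict mapping each vertex to the list of vertices its arrows point to:

```python
# Topological order by repeatedly removing a vertex with no incoming arrows.
from collections import deque

def topological_order(dag):
    indegree = {v: 0 for v in dag}
    for v in dag:
        for w in dag[v]:
            indegree[w] += 1
    queue = deque(v for v in dag if indegree[v] == 0)  # the "least" vertices
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in dag[v]:           # removing v uncovers new least vertices
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    return order
```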

Section 4.2

Given a list of tasks, each with a length (the time it takes to do) and a deadline, the problem is to find a schedule of the tasks that minimizes the maximum lateness. Some incorrect approaches:

1. Do the shortest tasks first. If a short task has a far-off deadline and a long task has a close deadline, this fails.

2. Define slack time as the difference between a task's deadline and its length, and do tasks in order of increasing slack time. This is also wrong: if a task has a small slack time but a far-away deadline, performing it before a task with a slightly larger slack time but a much sooner deadline can result in a much greater maximum lateness.
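The first failure mode can be checked with a tiny computation (the two tasks and their numbers are my own toy example, not from the book):

```python
# max_lateness runs tasks in the given order and reports the worst lateness.
def max_lateness(schedule):
    t, worst = 0, 0
    for length, deadline in schedule:
        t += length
        worst = max(worst, t - deadline)   # lateness of this task, if positive
    return worst

A = (1, 100)   # short task, far deadline
B = (10, 10)   # long task, close deadline
shortest_first = max_lateness([A, B])   # B finishes at time 11: lateness 1
deadline_first = max_lateness([B, A])   # both tasks finish on time
```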

Section 4.4

Dijkstra's algorithm finds the shortest path from a start node s to every other node in a weighted directed graph, so long as all edge weights are nonnegative. It works by maintaining a set of explored nodes, for which the shortest path is known, and a set of unexplored nodes. At each step, the unexplored node that minimizes (distance to some explored node) + (weight of the connecting edge) is added to the explored set. Using a priority queue, Dijkstra's algorithm can be implemented in O(m log n) time.
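A sketch of the heap-based version, assuming the graph is a dict mapping each vertex to a list of (neighbor, weight) pairs:

```python
# Dijkstra's algorithm with a binary heap as the priority queue.
import heapq

def dijkstra(graph, s):
    dist = {s: 0}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale entry: u was already explored
        for v, w in graph[u]:             # try extending the explored set via u
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist
```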

Section 4.5

The minimum spanning tree problem is to take a graph with weighted edges and find a subgraph that is a tree, connects all vertices, and has the minimum total weight. There are three natural algorithms:

Kruskal's Algorithm: Start with no edges. Move through the set of all edges in order of increasing weight, adding each edge so long as it does not form a cycle with the edges already added.

Prim's Algorithm: Similar to Dijkstra's algorithm. Start with a root node by itself and repeatedly add the minimum-weight edge extending from the explored set to a new vertex.

Reverse-Delete Algorithm: Similar to Kruskal's but backwards. Start with all edges and go through them in order of decreasing weight, deleting each edge unless doing so would disconnect the graph.
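A sketch of Kruskal's algorithm; the union-find structure used to detect cycles is my addition, not something this section covers:

```python
# Kruskal's algorithm: take edges in increasing weight, skip any edge whose
# endpoints are already in the same component. edges are (weight, u, v) triples.
def kruskal(vertices, edges):
    parent = {v: v for v in vertices}

    def find(v):                            # root of v's component
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving keeps trees shallow
            v = parent[v]
        return v

    tree = []
    for w, u, v in sorted(edges):           # increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # distinct components: no cycle
            parent[ru] = rv                 # union the two components
            tree.append((w, u, v))
    return tree
```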

To see that these algorithms work, observe the cut property: if S is any nonempty proper subset of the vertices, the minimum-cost edge crossing from S to a vertex not in S must be included in the minimum spanning tree. In any spanning tree that doesn't use it, that edge can replace some other edge crossing out of S, yielding a spanning tree of smaller cost.

courses/cs211/winter2014/journals/onye/home.1393460395.txt.gz · Last modified: 2014/02/27 00:19 by ekentao
CC Attribution-Noncommercial-Share Alike 4.0 International