Section 3.6 - Directed Acyclic Graphs and Topological Ordering. In the last section of Chapter 3, the book discusses DAGs and topological ordering. By definition, a directed graph that does not have any cycles is a directed acyclic graph (DAG). DAGs are used to model precedence relations or dependencies: given a set of tasks labeled 1 through n, a dependency between a pair i and j means that i must be performed before j. We represent this with a graph by making each task a node and adding a directed edge (i,j) whenever i must be done before j. Next the section defines a topological order of a graph G: an ordering of its nodes as v1, v2, ..., vn so that for every edge (vi, vj) we have i < j, i.e., every edge of the graph "points forward." The book first proves by contradiction that if G has a topological order, then G is a DAG: if G contained a cycle, no node of the cycle could come first in the order, so no such order could exist. It then shows that every DAG G contains a node v with no incoming edges, which can be placed at the start of a topological order. Lastly, the section proves the converse: if G is a DAG, then G has a topological ordering. This is proved by recursively deleting a node s that has no incoming edges and placing the nodes in the order of their deletion; after deleting s, you also delete every edge leaving s that is still in the graph. The algorithm to find this order runs in O(m+n) time.
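The deletion procedure above can be sketched in Python (a sketch of the repeated-deletion idea, not the book's pseudocode; the function name and the indegree-counter representation are my own choices):

```python
from collections import deque

def topological_order(n, edges):
    """Return a topological order of nodes 1..n by repeatedly
    removing a node with no incoming edges (O(m + n) overall)."""
    adj = {v: [] for v in range(1, n + 1)}
    indegree = {v: 0 for v in range(1, n + 1)}
    for i, j in edges:          # edge (i, j): i must come before j
        adj[i].append(j)
        indegree[j] += 1
    # Start with every node that currently has no incoming edges.
    queue = deque(v for v in range(1, n + 1) if indegree[v] == 0)
    order = []
    while queue:
        s = queue.popleft()
        order.append(s)         # place s next in the order
        for w in adj[s]:        # "delete" s and its outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    if len(order) < n:          # some nodes were never freed: a cycle remains
        raise ValueError("graph is not a DAG")
    return order
```

For example, with tasks 1..3 and dependencies (1,2) and (2,3), the only valid order is [1, 2, 3]; feeding it a cycle raises an error, matching the proof that a graph with a cycle has no topological order.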

Chapter 4 - Greedy Algorithms. Chapter 4 introduces the idea that an algorithm is greedy if it builds up a solution in small steps, choosing a decision at each step to optimize some underlying criterion. Even when a greedy algorithm does not find the optimum, it can produce solutions that are guaranteed to be "close" to optimal. The next sections discuss two basic methods for proving that a greedy algorithm produces an optimal solution to a problem. The first method is to establish that the greedy algorithm "stays ahead" of any other solution. The second method is an exchange argument, in which one considers any possible solution to the problem and gradually transforms it into the solution found by the greedy algorithm without hurting its quality.

Section 4.1 - Interval Scheduling: The Greedy Algorithm Stays Ahead. In the interval scheduling problem, a subset of the requests is compatible if no two of them overlap in time, and our goal is to accept as large a compatible subset as possible. Next, the section discusses possible design rules for a greedy algorithm to solve this. It considers several natural rules, but ultimately proves that an optimal solution is found by choosing the request i that finishes first, that is, the request with the smallest f(i); after deleting the requests that are incompatible with i, we select the next request j with the smallest f(j), and so on. The book proves that A (the set of accepted requests) has the same size as an optimal set O by showing that A always "stays ahead" of O. This is proven by induction: for each r, the rth interval the algorithm selects finishes at least as soon as the rth interval of O. The algorithm is implemented by first sorting the n requests in order of finish time (this takes O(n log n) time), then constructing an array S[1,...,n] with the property that S[i] contains the value s(i) (this takes O(n) time). We select the first interval, then iterate through the intervals in order until reaching the first interval j for which s(j) >= f(i) (this takes O(n) total). Therefore the algorithm ultimately runs in O(n log n) time. Lastly, the section extends to another interval problem, the Interval Partitioning Problem. In this problem all intervals must be scheduled, so overlapping requests are partitioned onto additional resources (such as classrooms). The section proves that the minimum number of resources required equals the depth of the set of intervals: every interval can be assigned a label (resource) such that no two overlapping intervals receive the same label and no interval ends up unlabeled.
Thus the greedy algorithm schedules every interval on a resource using a number of resources equal to the depth, and this is the optimal number of resources needed.
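The earliest-finish-time rule for interval scheduling can be sketched as follows (a minimal sketch assuming requests are given as (start, finish) pairs; the book's version precomputes the sorted arrays described above, while this version just sorts and scans):

```python
def schedule_intervals(requests):
    """Earliest-finish-time greedy. requests: list of (start, finish)
    pairs. Returns a maximum-size set of pairwise compatible requests."""
    accepted = []
    last_finish = float("-inf")
    # Sorting by finish time dominates the cost: O(n log n).
    for s, f in sorted(requests, key=lambda r: r[1]):
        if s >= last_finish:    # compatible with everything accepted so far
            accepted.append((s, f))
            last_finish = f     # f is the smallest finish time remaining
    return accepted
```

On the requests (0,3), (2,5), (4,7), (1,2), the rule first accepts (1,2) (smallest finish time), skips (0,3), accepts (2,5), and skips (4,7), returning a compatible set of size 2.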

4.2 - Scheduling to Minimize Lateness: An Exchange Argument. In this section we discuss scheduling to minimize the maximum lateness. Each request is now more flexible: instead of a fixed start time and finish time, a request i has a length and a deadline di. We say that a request i is late if f(i) > di, and its lateness is li = f(i) - di. To design an algorithm minimizing the maximum lateness the book considers several rules, but simply scheduling the jobs in ascending order of their deadlines (Earliest Deadline First) provides an optimal solution. The book first proves that there is an optimal schedule with no idle time. Next it proves that all schedules with no inversions and no idle time have the same maximum lateness, where an inversion occurs when a job i is scheduled before a job j even though dj < di. Lastly, the section uses the exchange argument, swapping adjacent inverted jobs without increasing the maximum lateness, to conclude that there is an optimal solution with no inversions and no idle time, and that is exactly the schedule produced by this greedy algorithm.
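The Earliest Deadline First rule can be sketched in a few lines (a sketch under the assumption that jobs are (length, deadline) pairs and all jobs start at time 0; the function name is my own):

```python
def min_max_lateness(jobs):
    """Earliest-deadline-first scheduling. jobs: list of (length,
    deadline) pairs. Schedules all jobs back to back from time 0 in
    ascending order of deadline and returns (schedule, max lateness)."""
    t = 0
    max_late = 0
    schedule = []
    for length, deadline in sorted(jobs, key=lambda j: j[1]):
        start, finish = t, t + length                # no idle time
        schedule.append((start, finish, deadline))
        max_late = max(max_late, finish - deadline)  # lateness = f(i) - di
        t = finish
    return schedule, max_late
```

For jobs (2,2) and (3,4), EDF runs the first job in [0,2] (on time) and the second in [2,5] (one unit late), so the maximum lateness is 1; by the exchange argument no schedule does better.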

4.4 - Shortest Paths in a Graph. This section looks at finding a shortest path from a start node s to each other node in a directed graph with nonnegative edge lengths. For a path P, the length of P, l(P), is the sum of the lengths of all edges in P. We can find shortest paths using Dijkstra's algorithm, proposed in 1959. The algorithm maintains a set S of vertices u for which a shortest-path distance d(u) from s has been determined; this is the "explored" part of the graph. Initially S = {s} and d(s) = 0. For each node v in V - S, we consider the shortest path that can be constructed by traveling through the explored part S to some u in S and then crossing the single edge (u, v); that is, d'(v) = min over edges e = (u, v) with u in S of d(u) + l_e. The algorithm chooses the node v in V - S for which this quantity is minimized, adds v to S, and defines d(v) to be the value d'(v). As each node v is added to the set S, the algorithm records the edge (u, v) on which it achieved this minimum, so that following the recorded edges back to s traces out a shortest path. The algorithm is proven correct by induction on the size of S, as long as all edge lengths are nonnegative. The implementation uses a priority queue supporting the operations "ChangeKey" and "ExtractMin". Using a priority queue, Dijkstra's Algorithm can be implemented on a graph with n nodes and m edges to run in O(m) time, plus the time for n "ExtractMin" and m "ChangeKey" operations. With a heap-based priority queue this gives an overall running time of O(m log n).
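A compact sketch of the algorithm using Python's heapq as the priority queue (a common variant: instead of a true ChangeKey, it pushes duplicate heap entries and skips stale ones on extraction, which keeps the O(m log n) bound):

```python
import heapq

def dijkstra(graph, s):
    """graph: dict mapping node -> list of (neighbor, length) pairs,
    with nonnegative lengths. Returns d, the shortest-path distance
    from s to every reachable node."""
    d = {s: 0}
    pq = [(0, s)]               # (tentative distance, node)
    explored = set()            # the set S of finished nodes
    while pq:
        dist, u = heapq.heappop(pq)          # ExtractMin
        if u in explored:
            continue                         # stale duplicate entry; skip
        explored.add(u)                      # add u to S with d(u) = dist
        d[u] = dist
        for v, length in graph.get(u, []):
            nd = dist + length               # d(u) + l_e for e = (u, v)
            if v not in explored and nd < d.get(v, float("inf")):
                d[v] = nd
                heapq.heappush(pq, (nd, v))  # stand-in for ChangeKey
    return d
```

On the graph with edges s->a (length 1), s->b (length 4), and a->b (length 2), the algorithm explores s, then a, then b, returning d(b) = 3 via the path s, a, b rather than 4 via the direct edge.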

courses/cs211/winter2011/journals/ashley/chapter_4.txt · Last modified: 2011/02/16 06:39 by leinwebera
CC Attribution-Noncommercial-Share Alike 4.0 International