Section 3.6 - Directed Acyclic Graphs and Topological Ordering

In the last section of Chapter 3, the book discusses DAGs and topological ordering. By definition, a directed graph that does not have any cycles is a directed acyclic graph (DAG). DAGs are used to model problems involving precedence relations or dependencies: given a set of tasks labeled 1 through n, a dependency pair (i, j) indicates that task i must be performed before task j. We represent this with a graph by making each task a node and adding a directed edge (i, j) whenever i must be done before j. Next the section shows that a DAG G has a topological order of its nodes v1, v2, ..., vn, meaning that for every edge (vi, vj) we have i < j; conversely, any graph with a topological order must be a DAG.

4.1 - Interval Scheduling: The Greedy Algorithm Stays Ahead

Chapter 4 then turns to the Interval Scheduling Problem: given requests that each have a start time s(i) and a finish time f(i), select as many mutually compatible (non-overlapping) requests as possible. The greedy algorithm sorts the requests in ascending order of finish time (this takes O(n log n)) and then accepts each request whose start time is at least the finish time of the most recently accepted request (this takes O(n)). Therefore the algorithm ultimately has an efficiency of O(n log n).

Lastly the section extends to another interval problem, referred to as the Interval Partitioning Problem. Here all intervals must be scheduled, so overlapping intervals are partitioned onto additional resources (such as classrooms). The section proves that the minimum number of resources required is equal to the depth of the set of intervals, that is, the maximum number of intervals passing over any single point in time. The greedy algorithm assigns every interval a label such that no two overlapping intervals receive the same label and no interval ends up unlabeled; it therefore schedules every interval using a number of resources equal to the depth, and this is the optimal number of resources needed.

4.2 - Scheduling to Minimize Lateness: An Exchange Argument

In this section we discuss the problem of minimizing the maximum lateness. Each request is now more flexible: instead of a fixed start time and finish time, a request i has a length t(i) and a deadline d(i), and the scheduler chooses when to run it. We say that a request i is late if f(i) > d(i), and its lateness is l(i) = f(i) - d(i). To design an algorithm that minimizes the maximum lateness the book considers several options, but simply scheduling the jobs in ascending order of their deadlines provides the optimal solution.
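The topological-ordering idea from Section 3.6, repeatedly removing a node with no incoming edges, can be sketched in Python as follows (the function name and the (node count, edge list) input format are illustrative, not from the book):

```python
from collections import deque

def topological_order(n, edges):
    """Return a topological order of nodes 0..n-1, or None if a cycle exists.

    edges: list of (i, j) pairs meaning task i must come before task j.
    """
    adj = [[] for _ in range(n)]
    indegree = [0] * n
    for i, j in edges:
        adj[i].append(j)
        indegree[j] += 1

    # Start with every node that has no prerequisites.
    ready = deque(v for v in range(n) if indegree[v] == 0)
    order = []
    while ready:
        v = ready.popleft()
        order.append(v)
        for w in adj[v]:              # "delete" v: drop its outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:
                ready.append(w)

    # If some nodes were never freed, the graph has a cycle and no
    # topological order exists.
    return order if len(order) == n else None
```

Each node and edge is handled a constant number of times, so the sketch runs in O(n + m) time.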
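The earliest-finish-time greedy rule for interval scheduling is short enough to sketch directly (the function name and the list-of-pairs input format are assumptions of this sketch):

```python
def interval_scheduling(requests):
    """Greedy earliest-finish-time rule for interval scheduling.

    requests: list of (start, finish) pairs.
    Returns a maximum-size set of mutually compatible requests.
    Sorting dominates the cost, so the whole routine is O(n log n).
    """
    accepted = []
    last_finish = float("-inf")
    for s, f in sorted(requests, key=lambda r: r[1]):  # ascending finish time
        if s >= last_finish:          # compatible with all accepted so far
            accepted.append((s, f))
            last_finish = f
    return accepted
```

The single pass after sorting accounts for the O(n) step mentioned above.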
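One common way to realize the interval-partitioning greedy is to sweep the intervals in order of start time and reuse the label of any resource whose last interval has already finished; a min-heap keyed on finish times makes that reuse check fast. A minimal sketch under those assumptions (the heap is an implementation choice of this sketch, not spelled out in the book):

```python
import heapq

def interval_partitioning(intervals):
    """Label every interval so that overlapping intervals get different labels.

    intervals: list of (start, finish) pairs.
    Returns (labels, k) where labels maps each interval to a resource number
    and k is the number of resources used, which equals the depth.
    """
    labels = {}
    busy = []        # min-heap of (finish_time, label) for resources in use
    next_label = 0
    for s, f in sorted(intervals):              # ascending start time
        if busy and busy[0][0] <= s:
            # The earliest-finishing resource is free again; reuse its label.
            _, label = heapq.heappop(busy)
        else:
            label = next_label                  # open a new resource
            next_label += 1
        labels[(s, f)] = label
        heapq.heappush(busy, (f, label))
    return labels, next_label
```

Since a new resource is opened only when every existing one is still busy, the count never exceeds the depth, matching the optimality claim above.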
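The deadline-ascending rule from Section 4.2 can be sketched as follows (the (length, deadline) input format and return shape are assumptions of this sketch):

```python
def minimize_lateness(jobs, start=0):
    """Schedule jobs in ascending order of deadline, with no idle time.

    jobs: list of (length, deadline) pairs.
    Returns (schedule, max_lateness), where schedule is a list of
    (start, finish, deadline) triples in the order the jobs run.
    """
    t = start
    schedule = []
    max_lateness = 0
    for length, deadline in sorted(jobs, key=lambda j: j[1]):  # earliest deadline first
        t += length                              # finish time f(i); no idle time
        schedule.append((t - length, t, deadline))
        # Lateness l(i) = f(i) - d(i); only positive values matter.
        max_lateness = max(max_lateness, t - deadline)
    return schedule, max_lateness
```

As with interval scheduling, the sort dominates, so this runs in O(n log n).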
Initially the book proves that this is optimal by showing there is an optimal schedule with no idle time, one that runs jobs back to back without gaps. Next it proves that all schedules with no inversions and no idle time have the same maximum lateness, where a schedule A' has an inversion if a job i is scheduled before a job j even though d(j) < d(i). Lastly the section uses an exchange argument, swapping adjacent inverted jobs without increasing the maximum lateness, to reach the conclusion that "there is an optimal solution that has no inversions and no idle time," and that solution is exactly the one produced by this greedy algorithm.

4.4 - Shortest Paths in a Graph

This section looks at finding a shortest (minimum-cost) path in a directed graph with nonnegative edge lengths, from a start node s to each other node. For a path P, the length of P, written l(P), is the sum of the lengths of all edges in P. We can find such a path by using Dijkstra's algorithm, proposed in 1959. The algorithm maintains a set S of vertices u for which a shortest-path distance d(u) from s has been determined; this is the "explored" part of the graph. Initially S = {s} and d(s) = 0. Then, for each node v in V - S, we determine the shortest path that can be constructed by traveling through the explored part S to some u in S and then crossing the single edge (u, v); that is, d'(v) = min over edges (u, v) with u in S of d(u) + l(u, v). The algorithm chooses the node v in V - S for which this quantity is minimized, adds v to S, and defines d(v) to be the value d'(v). As each node v is added to S, the algorithm also records the edge (u, v) on which the minimum was achieved, so that following these recorded edges back toward s traces out a shortest path to each explored node. This algorithm is proven optimal by induction on the size of S, as long as all edge lengths are nonnegative. The implementation uses a priority queue supporting the operations "ChangeKey" and "ExtractMin" to keep each iteration fast.
Therefore, by using a priority queue, Dijkstra's Algorithm can be implemented on a graph with n nodes and m edges to run in O(m) time, plus the time for n "ExtractMin" and m "ChangeKey" operations. With a heap-based priority queue each of those operations costs O(log n), so the overall runtime is O(m log n).
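Python's heapq module supplies ExtractMin (heappop) but not ChangeKey, so a common sketch instead pushes a fresh entry whenever a shorter tentative distance is found and skips stale entries on extraction; the O(m log n) bound still holds. A minimal version under that assumption (graph representation and function name are my own):

```python
import heapq

def dijkstra(graph, s):
    """Dijkstra's algorithm with a binary heap.

    graph: dict mapping node -> list of (neighbor, edge_length), lengths >= 0.
    Returns d, mapping each reachable node to its shortest-path distance from s.
    """
    d = {}                     # finalized distances: the explored set S
    pq = [(0, s)]              # heap of (tentative distance d'(v), v)
    while pq:
        dist, v = heapq.heappop(pq)        # ExtractMin
        if v in d:
            continue                       # stale entry; v already explored
        d[v] = dist                        # add v to S with d(v) = d'(v)
        for w, length in graph.get(v, []):
            if w not in d:
                # Push instead of ChangeKey; duplicates are skipped above.
                heapq.heappush(pq, (dist + length, w))
    return d
```

Keeping a separate record of the edge (u, v) on which each pop succeeded would recover the shortest paths themselves, not just their lengths.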