Greedy Algorithms
Greedy algorithms essentially use a localized decision-making process to yield a globally optimal solution. Fortunately, the ability of a greedy algorithm to solve a nontrivial problem usually reveals a great deal about the structure and nature of the problem itself. In the first two sections of the chapter we will work through two example problems to illustrate two typical methods of proving that a greedy algorithm indeed yields an optimal solution:
- greedy stays ahead (and structural)
- exchange argument.
4.1: Interval Scheduling: Greedy Stays Ahead
Interval Scheduling Problem
The Interval Scheduling Problem consists of a set of n requests for use of a single resource, the ith request corresponding to an interval of time starting at s(i) and ending at f(i). Essentially, we want to schedule the maximum number of “jobs” such that none of them overlap. An efficient method for this is a greedy algorithm.
In this particular case, the book (and we, in class) discussed that the greedy rule should be based on accepting the valid job (meaning one that does not overlap with the jobs already chosen) which finishes first, i.e. the job which minimizes f(i). This maximizes the open time remaining after adding a single job. So, we let R denote the list of jobs we haven't processed yet, and A the set of jobs we've accepted in building our final schedule. The algorithm (as discussed in class, because the book's version is far too removed from actual implementation) is thus as follows:
- Sort jobs in R by finish times so that f(1) ≤ f(2) ≤ … ≤ f(n)
- A = {}
- for j=1 to n
- if job j compatible with A
- add j to A
- return A
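As a concrete version of the pseudocode, here is a minimal Python sketch (the (start, finish) job format and all names are my own, for illustration):

  # Greedy interval scheduling: repeatedly accept the compatible job
  # that finishes first. jobs is a list of (start, finish) pairs.
  def interval_schedule(jobs):
      accepted = []
      last_finish = float('-inf')   # finish time of the last accepted job
      for s, f in sorted(jobs, key=lambda job: job[1]):  # sort by finish time
          if s >= last_finish:      # compatible with everything accepted so far
              accepted.append((s, f))
              last_finish = f
      return accepted

  # Example: accepts (0, 3), (4, 7), (8, 11)
  print(interval_schedule([(0, 3), (2, 5), (4, 7), (6, 9), (8, 11)]))

Note that maintaining last_finish is exactly the constant-time compatibility check used in the runtime analysis below.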
Analyzing the Algorithm
Firstly, it's clear that A is indeed a compatible set of jobs. Next, we must show that A is optimal. Suppose that O is an optimal set of intervals; we must show that |A| = |O|. Let i_r be the rth job in A and j_r the rth job in O.
Note then that our greedy rule guarantees that f(i_1) ≤ f(j_1), because the greedy rule automatically picks the job with the earliest finish time. Then, through induction we can prove that
- for all indices r ≤ k, where k = |A|, we have f(i_r) ≤ f(j_r).
Then, if A is not optimal, O would have to contain more jobs; say |O| = m > k. But f(i_k) ≤ f(j_k), and the (k+1)st job in O starts after f(j_k), hence after f(i_k), so it is compatible with everything in A. The greedy algorithm would therefore have grabbed that additional job too, a contradiction.
So, we know A returned by the greedy algorithm is indeed optimal.
Runtime
Sorting the jobs takes O(n log n) time, as we know. Then each compatibility check and insertion takes only constant time, because we can maintain the finish time of the last job added to A and simply compare it with the start time of the next job. So the for loop is O(n), and the whole algorithm runs in O(n log n).
Interval Partitioning Problem
A variation on the problem we just did is the interval partitioning problem. The easiest way to think of this is a set of class lectures that each occur at specific time intervals, and we want an algorithm to return a schedule that minimizes the number of classrooms required.
We define the depth of the set of intervals to be the maximum number that overlap at a given time. Clearly, the number of classrooms needed is at least the depth of the set. We proceed to give an algorithm which schedules all lectures in a number of classrooms equal to the depth.
Suppose we are given a set of intervals. Let d be the depth of the set. We will work to assign each interval (class lecture) a label from 1 to d, which we can interpret as the classroom designated for that lecture. The following algorithm begins by sorting the intervals according to start times; it then iterates through the sorted intervals, scheduling each one in an existing classroom if one is free, and otherwise allocating a new classroom for it. The algorithm is as follows:
- Sort intervals by start times so that s(1) ≤ s(2) ≤ … ≤ s(n)
- d = 0 (number of allocated classrooms)
- for j = 1 to n
- if (lecture j is compatible with some classroom k)
- schedule lecture j in classroom k
- else
- allocate a new classroom d+1
- schedule lecture j in classroom d+1
- d = d+1
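As a minimal Python sketch of this (the heap of classroom finish times is my implementation choice for the “compatible with some classroom” test; it is not part of the pseudocode above):

  import heapq

  # Interval partitioning: assign each lecture (start, finish) a classroom.
  def partition_intervals(lectures):
      rooms = []       # min-heap of (finish_time, room_number) pairs
      schedule = []    # (start, finish, room) triples
      for s, f in sorted(lectures):         # sort by start time
          if rooms and rooms[0][0] <= s:
              _, k = heapq.heappop(rooms)   # earliest-finishing room is free
          else:
              k = len(rooms) + 1            # all rooms busy: allocate a new one
          schedule.append((s, f, k))
          heapq.heappush(rooms, (f, k))
      return schedule, len(rooms)           # room count should equal the depth

Per the analysis below, the final room count equals the depth of the interval set.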
Analyzing Algorithm
We see that every interval will be assigned a label, and no two overlapping intervals will receive the same label. Moreover, we never reach a point where all labels are in use: when we process lecture j, any classroom that is unavailable holds a lecture that overlaps the start time s(j), and by the definition of depth at most d-1 lectures other than j can overlap that point. So we are never forced to allocate a (d+1)st classroom; since the depth d is also a lower bound on the number of classrooms, our output schedule is indeed optimal.
Runtime
We know that this runs in O(n log n) time: sorting the intervals takes O(n log n), and if we track classroom finish times in a priority queue, each of the n scheduling steps takes O(log n).
Personally, I think they got hung up between the generality of the problem and the examples of classes/jobs. The generality they discussed (classrooms as “resources”, etc.) did not help in the ways that mathematical formalism does, and then they also got caught between the two quite often. I'm leaning toward a 7/10.
4.2: Scheduling to Minimize Lateness: Exchange Argument
In this problem, we are given a set of tasks, each of which takes a set time t(j) to complete and has a set deadline d(j). We proceed to give an algorithm which will minimize the maximum lateness among all the tasks (again assuming none can overlap):
- Sort the n jobs by deadline so that d(1) ≤ d(2) ≤ … ≤ d(n)
- t = 0
- for j=1 to n
- assign job j to interval [t, t+t(j)]
- s(j)=t
- f(j)=t+t(j)
- t=t+t(j)
- output intervals [s(j),f(j)]
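A small Python rendering of this earliest-deadline-first rule (the (duration, deadline) job format and names are illustrative), which also computes the maximum lateness of the schedule it builds:

  # Minimize maximum lateness: schedule jobs in order of deadline.
  # jobs is a list of (duration, deadline) pairs.
  def min_max_lateness(jobs):
      t = 0
      intervals = []
      max_late = 0
      for dur, dl in sorted(jobs, key=lambda job: job[1]):  # sort by deadline
          s, f = t, t + dur                  # job occupies [t, t + t(j)]
          intervals.append((s, f))
          max_late = max(max_late, f - dl)   # this job's lateness, if positive
          t = f
      return intervals, max_late

  # Example: three jobs; returns ([(0, 1), (1, 3), (3, 6)], 0)
  print(min_max_lateness([(3, 6), (1, 2), (2, 4)]))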
Analyzing Algorithm
We can make a number of observations. First, there is of course an optimal schedule with no idle time (time in which no job is being done). Defining an inversion to be a pair of jobs i and j such that i is scheduled before j but d(j)<d(i), we note that our algorithm automatically produces a schedule without inversions. Additionally, all schedules with no inversions and no idle time have the same maximum lateness: two such schedules can differ only in the order of jobs that share the same deadline, those jobs form a consecutive block, and the last of them finishes at the same time (and so has the same lateness) either way.
Now, consider an optimal schedule O with no idle time. If O has an inversion, then it has an adjacent one: there is a pair of jobs i and j such that j is scheduled immediately after i and d(j)<d(i). (If every adjacent pair were in deadline order, then by transitivity the entire schedule would be inversion-free.)
Suppose O has at least one inversion, and suppose i and j are an adjacent inversion as mentioned above. Upon swapping them, no new inversion is created (the relative order of every other pair of jobs is unchanged), but we eliminate this one. So we swap them and get a schedule with one fewer inversion.
Now, a slightly more careful argument based on inequalities shows that swapping an adjacent inverted pair also never increases the maximum lateness.
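For a concrete instance (numbers mine): take adjacent jobs i with t(i)=2, d(i)=10 and j with t(j)=3, d(j)=4, where i starts at time 0. Scheduling i before j is an inversion since d(j)<d(i); here i finishes at 2 (lateness 0) and j at 5 (lateness 5-4=1), so the maximum lateness is 1. After the swap, j finishes at 3 (lateness 0) and i at 5 (lateness 0). In general, after the swap i finishes exactly where j used to, and its lateness there is f(j)-d(i) < f(j)-d(j), i.e. strictly less than j's old lateness, while every other job finishes no later than before.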
Therefore, starting from any arbitrary O, we can continue this process and end up at the schedule produced by our greedy algorithm (or an equivalent one, if there are ties in deadlines), all the while never increasing the maximum lateness. Hence, our output is indeed optimal.
This was an enjoyable section, I think it was very well explained and complemented the class material perfectly. 9/10.
4.4: Shortest Paths in a Graph
Consider a weighted directed graph G=(V,E). We proceed to give an algorithm which will determine the shortest (lowest-cost) path from a source node s to each other node. We assume that a path does indeed exist from s to all others. We will write l(e) for the length of edge e. Then the length of any path is the sum of the lengths of the edges in it.
We maintain a set S of vertices u for which we have determined a shortest path distance d(u) from s–the “explored” component.
Dijkstra's Algorithm (G, l)
- Let S be the set of explored nodes
- For each u in S, store distance d(u)
- initialize S={s} and d(s)=0
- While S != V (not all nodes)
- Select a node v not in S with at least one edge from S for which d'(v) = min_{e=(u,v): u in S} ( d(u) + l(e) ) is as small as possible
- Add v to S and define d(v)=d'(v)
Analyzing Algorithm
We note that at any point in the algorithm's execution, for each node u in S, the value d(u) is indeed the length of a shortest possible path from s to u. This immediately gives us algorithm correctness, because when it finishes, every node is in S.
Note that this only works under the assumption of nonnegative edge lengths; note also that the algorithm is in essence a continuous version of breadth-first search.
Implementation with PQ and runtime
Using a priority queue to store the unexplored nodes, with their priority being their current tentative distance d'(v), we can get the runtime of this algorithm down to O(m log n). Essentially, each edge can trigger one O(log n) update of a priority in the PQ (we update d'(v) as we scan each edge, so at most m times), and each of the n ExtractMin operations is also O(log n). For a connected graph m ≥ n-1, so the edge term dominates. So, with a heap-based PQ implementation, we get a runtime of O(m log n).
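As a concrete sketch in Python (this is the common lazy-deletion variant, which pushes duplicate heap entries instead of doing a true ChangeKey; the graph format and names are mine):

  import heapq

  # Dijkstra's algorithm. graph maps each node to a list of (neighbor, length)
  # pairs; all lengths must be nonnegative.
  def dijkstra(graph, s):
      d = {s: 0}
      pq = [(0, s)]                  # (tentative distance, node)
      while pq:
          dist, u = heapq.heappop(pq)
          if dist > d.get(u, float('inf')):
              continue               # stale entry: u was already finalized
          for v, le in graph[u]:
              if dist + le < d.get(v, float('inf')):
                  d[v] = dist + le
                  heapq.heappush(pq, (d[v], v))
      return d

  g = {'s': [('a', 1), ('b', 4)], 'a': [('b', 2)], 'b': []}
  print(dijkstra(g, 's'))            # {'s': 0, 'a': 1, 'b': 3}

Pushing duplicates lets the heap grow to size O(m) rather than O(n), but since log m ≤ 2 log n this is still O(m log n) overall.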
This was yet another good section and went well with the classwork. However, I appreciated the way we went through it in class a bit more, combining the PQ implementation into our main work-through of the algorithm. 8/10.
4.5: The Minimum Spanning Tree Problem
Given an undirected weighted graph G, we aim to choose a set of edges within G such that it connects all the nodes of G but the edges sum to the minimum cost possible. First, note that the minimum-cost solution to this problem is a tree: if it contained a cycle, we could delete any edge of that cycle and remain connected at lower cost. So, we proceed to define three algorithms which solve this problem using different routes:
- Prim's Algorithm: we essentially run Dijkstra's algorithm, maintaining a partial tree (our “explored” set from Dijkstra's) as we build greedily outward by always adding the node that can be attached to the current tree by the cheapest edge.
- Kruskal's Algorithm: builds the spanning tree by successively inserting edges. We consider the edges in order of ascending cost, and if adding it would create a cycle in our tree (thus making it not a tree), then we discard/skip it. If it would not create a cycle, then we add it and go to the next.
- Reverse-Delete Algorithm: essentially Kruskal's, but backward. We consider the edges in descending order of cost, and instead of starting with an empty tree and building, we start with the whole graph and successively delete. So, when considering an edge, if its deletion would lead to some node or nodes becoming disconnected, then we do not delete it and move on to the next.
Analyzing Algorithms
We can first analyze these algorithms by laying out a few properties of when we know that an edge will be in a minimum spanning tree (MST) and when we know it will not be. We assume that all edge costs are distinct.
- Cut Property–when we know that every MST contains edge e. Let S be any subset of nodes that is neither empty nor equal to all of V (the set of all our nodes in G), and let edge e=(v,w) be the minimum cost edge with one end in S and the other in V-S. Then every MST contains e.
- This property very easily extends to proving that both Kruskal's and Prim's Algorithms produce a minimum spanning tree. Each essentially builds outward in expanding connected sets until they span all the nodes, and every edge either algorithm adds is the cheapest edge with exactly one end in some set S, so by the Cut Property it belongs to every MST.
- Cycle Property–when we know that no MST contains edge e. Let C be any cycle in G, and let edge e=(v,w) be the most expensive edge belonging to C. Then e does not belong to any minimum spanning tree.
- This property extends to prove that the Reverse-Delete Algorithm produces an MST: every edge we delete is the most expensive edge on some cycle, so by the Cycle Property it belongs to no MST, and we keep deleting until no cycles remain, leaving a spanning tree (no cycles, still connected, touches all nodes).
It is a short extension to eliminate the assumption of all distinct weights. The proof utilizes small perturbations to make any equal edges actually unequal, but qualitatively we can understand these as simple tiebreakers, and realize that the net cost of the spanning tree will not change based on the choice between tied cost edges in the algorithms.
Implementing Prim's Algorithm
Prim's Algorithm can be implemented in essentially the exact same manner as Dijkstra's (above). Therefore, with a priority queue, the algorithm runs in O(m log n) time.
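Concretely, here is a heap-based Prim's in the same style as the Dijkstra sketch above (again lazy deletion; I assume an adjacency-list format with each undirected edge listed in both directions):

  import heapq

  # Prim's algorithm: grow a tree outward from s, always attaching the node
  # reachable by the cheapest edge. graph maps nodes to (neighbor, cost) lists.
  def prim(graph, s):
      explored = {s}
      pq = [(c, s, v) for v, c in graph[s]]    # candidate edges leaving the tree
      heapq.heapify(pq)
      mst = []
      while pq and len(explored) < len(graph):
          c, u, v = heapq.heappop(pq)
          if v in explored:
              continue                          # edge no longer leaves the tree
          explored.add(v)
          mst.append((u, v, c))
          for w, cw in graph[v]:
              if w not in explored:
                  heapq.heappush(pq, (cw, v, w))
      return mst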
4.6: Implementing Kruskal's Algorithm: the Union-Find Data Structure
As Kruskal's Algorithm works forward, it adds edges to our solution that do not necessarily reach out of a single connected set. In fact, it slowly combines connected subsets until they become one large connected set: our MST. So, given this problem, we create a boutique data structure–the Union-Find data structure–to suit our exact needs.
This structure will support the following operations:
- Find(u) will return the name of the set containing node u
- Union(A,B) will merge two sets A and B into a single, new set.
So, when considering an edge (u,v), the Find operation can be used to test if the nodes are already in the same set (connected component), and if they are not, then we add the edge to our MST-in-progress, and perform Union(Find(u),Find(v)) to merge their connected components into a single new one.
We aim to implement the above operations in O(log n) time. Additionally, we must include the MakeUnionFind(V) operation, which takes our set of vertices and initializes a Union-Find structure where each vertex is in a separate singleton set of its own. We aim to make this operation O(n).
Using a pointer-based system, we initialize a record for each node with a null pointer. This does MakeUnionFind in O(n) time. Each set is named after a single node in it, and we can easily maintain, in constant time, a value telling us the size of the set. Then, when performing Union, we simply point the namesake node of the smaller set at the namesake of the larger (if they are equal in size, it doesn't matter which). This takes O(1) time. Then, when calling Find on a given node, we simply follow the pointer trail upward until we reach the namesake node. Because the naming convention keeps the larger set's namesake, the set containing a node at least doubles in size every time that node's name changes; so a node's name can change at most log n times, and there are at most log n pointers to follow before reaching the namesake. Therefore, Find runs in O(log n) time.
This can actually be further refined by collapsing the pointer system as we trace upwards from a node when calling Find: adjust the pointers of every node we pass to point directly to the namesake. This therefore reduces the time of a sequence of n Find calls to O(n alpha(n)) where alpha(n) is the inverse Ackermann function–a function that is extremely slow growing and less than or equal to 4 for any value of n we could encounter in practice.
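A Python sketch of this structure with both refinements (union by size and path compression), together with the Kruskal loop that drives it; all names here are mine:

  # Union-Find with union by size and path compression.
  class UnionFind:
      def __init__(self, nodes):                # MakeUnionFind: O(n)
          self.parent = {v: v for v in nodes}   # each node starts as its own set
          self.size = {v: 1 for v in nodes}

      def find(self, u):                        # follow pointers to the namesake
          root = u
          while self.parent[root] != root:
              root = self.parent[root]
          while self.parent[u] != root:         # path compression: repoint the
              self.parent[u], u = root, self.parent[u]   # whole trail at the root
          return root

      def union(self, a, b):                    # point smaller set at larger
          ra, rb = self.find(a), self.find(b)
          if ra == rb:
              return
          if self.size[ra] < self.size[rb]:
              ra, rb = rb, ra
          self.parent[rb] = ra
          self.size[ra] += self.size[rb]

  # Kruskal's algorithm: edges is a list of (cost, u, v) triples.
  def kruskal(nodes, edges):
      uf = UnionFind(nodes)
      mst = []
      for c, u, v in sorted(edges):             # ascending cost; sort dominates
          if uf.find(u) != uf.find(v):          # different components: no cycle
              mst.append((u, v, c))
              uf.union(u, v)
      return mst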
Regardless, we hit the bottleneck of sorting the m edges: O(m log m) time, which, since m ≤ n^2, is more succinctly put as O(m log n). We do a total of at most 2m Find operations and n-1 Union operations while running Kruskal's, so the sorting time dominates and the final running time is O(m log n).
