Introduction: Greedy Algorithms
An algorithm is said to be greedy if it builds up a solution in small steps, choosing a decision at each step myopically to optimize some underlying criterion. When a greedy algorithm succeeds in solving a nontrivial problem optimally, it typically implies something interesting and useful about the structure of the problem itself. There are two basic methods for proving that a greedy algorithm produces an optimal solution to a problem: greedy stays ahead and the exchange argument. Applications of greedy algorithms include shortest paths in a graph, the Minimum Spanning Tree Problem, the construction of Huffman codes for performing data compression, and the more complex Minimum-Cost Arborescence Problem.
Section 4.1: Interval Scheduling: The Greedy Algorithm Stays Ahead
Set Up: Consider a set of requests {1, 2, …, n}; the ith request corresponds to an interval of time starting at s(i) and finishing at f(i). A subset of the requests is compatible if no two of them overlap in time, and the goal is to accept as large a compatible subset as possible. The compatible sets of maximum size will be called optimal.
Designing a Greedy Algorithm
The basic idea in a greedy algorithm for interval scheduling is to use a simple rule to select a first request i<sub>1</sub>. Once request i<sub>1</sub> is accepted, we reject all requests that are not compatible with i<sub>1</sub>. We then select the next request i<sub>2</sub> to be accepted, and again reject all requests that are not compatible with i<sub>2</sub>. We continue in this fashion until we run out of requests.
Here are the selection rules we might try, leading up to one that yields an optimal solution:
1. Select the available request that starts earliest –> Not an optimal solution
2. Accept the request that requires the smallest interval of time, or the request for which f(i) - s(i) is as small as possible –> Not an optimal solution, but somewhat better than 1.
3. For each request, count the number of other requests that are not compatible with it, and accept the request that has the fewest incompatible requests –> Might seem promising, but this rule also fails to give an optimal solution in general
4. Accept first the request that finishes first, or the request i for which f(i) is as small as possible –> This leads to an optimal solution. The general algorithm is shown below:
Initially let R be the set of all requests, and let A be empty
While R is not yet empty
Choose a request i ∈ R that has the smallest finishing time
Add request i to A
Delete all requests from R that are not compatible with request i
EndWhile
Return the set A as the set of accepted requests
Analyzing the Algorithm
For the above algorithm we can prove optimality by proving |A| = |O|, where A is the compatible set of requests returned by the algorithm and O is an optimal set of intervals. This shows that A contains the same number of intervals as O and hence is also an optimal solution.
Theorem 4.3 The greedy algorithm returns an optimal set A.
Implementation and Running Time
We can make our algorithm run in time O(n log n) as follows:
- Begin by sorting the n requests in order of finishing time and labeling them in this order; that is, we will assume that f(i) ≤ f(j) when i < j. This takes time O(n log n).
- In an additional O(n) time, we construct an array S[1… n] with the property that S[i] contains the value s(i). We now select requests by processing the intervals in order of increasing f(i). We always select the first interval; we then iterate through the intervals in order until reaching the first interval j for which s(j) ≥ f(1); we then select this one as well, and continue in the same way.
- So we implement the greedy algorithm analyzed above in one pass through the intervals, spending constant time per interval. Thus this part of the algorithm takes time O(n).
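As a concrete illustration, here is a minimal Python sketch of the O(n log n) implementation described above; the function name and the (start, finish) tuple layout are illustrative assumptions, not part of the original text.

```python
def schedule_intervals(requests):
    """Return a maximum-size compatible subset of (start, finish) requests."""
    # Sort requests by finishing time: O(n log n).
    ordered = sorted(requests, key=lambda r: r[1])
    accepted = []
    last_finish = float("-inf")
    # One pass, constant time per interval: O(n).
    for s, f in ordered:
        if s >= last_finish:  # compatible with everything accepted so far
            accepted.append((s, f))
            last_finish = f
    return accepted
```

On the input [(0, 3), (2, 5), (4, 7), (1, 8)] this accepts (0, 3) and (4, 7), rejecting the two intervals that overlap them.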
Extensions
The Interval Scheduling Problem we considered here is a quite simple scheduling problem. There are further complications that could arise in practical settings such as:
- We assumed that all requests were known to the scheduling algorithm when it was choosing the compatible schedule, but online algorithms must make decisions as time proceeds, without knowledge of future input.
- Our goal was to maximize the number of satisfied requests, but in some situations each request may have a different value or weight to us (satisfying one request may matter more than satisfying another).
A Related Problem: Scheduling All Intervals
The Problem A related problem arises if we have many identical resources available and we wish to schedule all the requests using as few resources as possible. This is called the Interval Partitioning Problem because the goal here is to partition all intervals across multiple resources. The depth of a set of intervals is the maximum number of intervals that pass over any single point on the timeline; clearly at least this many resources are needed. We will now design a simple greedy algorithm that schedules all intervals using a number of resources equal to the depth. This immediately implies the optimality of the algorithm.
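Since the depth is the benchmark the algorithm is measured against, it may help to see how it can be computed. The following is a small Python sketch (the function name and tuple layout are assumptions): sweep the sorted endpoints, counting starts as +1 and finishes as -1.

```python
def depth(intervals):
    """Maximum number of (start, finish) intervals overlapping any single point."""
    events = []
    for s, f in intervals:
        events.append((s, 1))    # an interval opens
        events.append((f, -1))   # an interval closes
    # Process finishes before starts at ties, matching the convention that
    # an interval ending at time t is compatible with one starting at t.
    events.sort(key=lambda e: (e[0], e[1]))
    best = current = 0
    for _, delta in events:
        current += delta
        best = max(best, current)
    return best
```

For example, the three intervals (0, 2), (1, 3), (1, 4) all pass over the point 1.5, so their depth is 3.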
Designing the Algorithm Let d be the depth of the set of intervals. We show how to assign a label to each interval, where the labels come from the set of numbers {1, 2, ..., d}, and the assignment has the property that overlapping intervals are labeled with different numbers. This gives the desired solution, since we can interpret each number as the name of a resource, and the label of each interval as the name of the resource to which it is assigned. The algorithm we use for this (shown below) is a simple one-pass greedy strategy that orders intervals by their starting time.
Sort the intervals by their start times, breaking ties arbitrarily
Let I<sub>1</sub>, I<sub>2</sub>, ..., I<sub>n</sub> denote the intervals in this order
For j = 1, 2, 3, ..., n
For each interval I<sub>i</sub> that precedes I<sub>j</sub> in sorted order and overlaps it
Exclude the label of I<sub>i</sub> from consideration for I<sub>j</sub>
Endfor
If there is any label from {1, 2, ..., d} that has not been excluded then
Assign a nonexcluded label to I<sub>j</sub>
Else
Leave I<sub>j</sub> unlabeled
Endif
Endfor
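The pseudocode above translates directly into a short Python sketch; the function name and the (start, finish) tuple layout are illustrative assumptions. For simplicity it does not cap the label set at d, since (as argued below) the smallest nonexcluded label never exceeds the depth.

```python
def partition_intervals(intervals):
    """Label (start, finish) intervals so that overlapping intervals differ.

    One-pass greedy strategy: sort by start time and give each interval
    the smallest label not used by an overlapping predecessor.
    """
    order = sorted(range(len(intervals)), key=lambda j: intervals[j][0])
    labels = [None] * len(intervals)
    for pos, j in enumerate(order):
        s_j, f_j = intervals[j]
        excluded = set()
        for i in order[:pos]:             # intervals preceding I_j in sorted order
            s_i, f_i = intervals[i]
            if s_i < f_j and s_j < f_i:   # I_i overlaps I_j
                excluded.add(labels[i])
        label = 1
        while label in excluded:          # smallest nonexcluded label
            label += 1
        labels[j] = label
    return labels
```

On [(0, 3), (2, 5), (4, 7)] this assigns labels 1, 2, 1: two resources, matching the depth of 2.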
Analyzing the Algorithm If we use the greedy algorithm above, every interval will be assigned a label, and no two overlapping intervals will receive the same label. With d labels at our disposal, as we sweep through the intervals from left to right, assigning an available label to each interval we encounter, we can never reach a point where all the labels are currently in use. Since the algorithm uses at most d labels, we conclude that it always uses the minimum possible number of labels.
Theorem 4.6 The greedy algorithm above schedules every interval on a resource, using a number of resources equal to the depth of the set of intervals. This is the optimal number of resources needed.
I found that there was too much information in this section, and it would have been easier to read if the authors had done less explaining and more listing of steps. The excess information made the section hard to read; an introduction to a new subject does not need this much explanation. I would give this section a 5/10 for readability. Another simple example or two of where a greedy algorithm is the best solution would have been helpful.
Section 4.2: Scheduling to Minimize Lateness: An Exchange Argument
The Problem
Consider again a situation in which we have a single resource and a set of n requests to use the resource for an interval of time. Assume that the resource is available starting at time s. In contrast to the previous problem, each request is now more flexible. Instead of a start time and finish time, the request i has a deadline d<sub>i</sub>, and it requires a contiguous time interval of length t<sub>i</sub>, but it is willing to be scheduled at any time before the deadline. Suppose that we plan to satisfy each request, but we are allowed to let certain requests run late. Thus, beginning at our overall start time s, we will assign each request i an interval of time of length t<sub>i</sub>; let us denote this interval by [s(i), f(i)], with f(i) = s(i) + t<sub>i</sub>. Unlike the previous problem, then, the algorithm must actually determine a start time (and hence a finish time) for each interval. We say that a request i is late if it misses the deadline, that is, if f(i) > d<sub>i</sub>; its lateness is l<sub>i</sub> = f(i) - d<sub>i</sub>, and l<sub>i</sub> = 0 if it is not late. The goal is to minimize the maximum lateness, L = max<sub>i</sub> l<sub>i</sub>.
Designing the Algorithm
There are several natural greedy approaches in which we look at the data (ti, di) about the jobs and use this to order them according to some simple rule:
1. Schedule the jobs in order of increasing length t<sub>i</sub>, so as to get the short jobs out of the way quickly –> Ignores the deadlines of the jobs, so it does not work
2. A more natural greedy algorithm would be to sort jobs in order of increasing slack di - ti –> This greedy rule fails as well
3. Sort the jobs in increasing order of their deadlines di, and schedule them in this order. This rule is called Earliest Deadline First and there is an intuitive basis to it: we should make sure that jobs with earlier deadlines get completed earlier. –> This algorithm always produces optimal solutions. It is shown below.
Order the jobs in order of their deadlines
Assume for simplicity of notation that d<sub>1</sub> ≤ ... ≤ d<sub>n</sub>
Initially, f = s
Consider the jobs i = 1, ..., n in this order
Assign job i to the time interval from s(i) = f to f(i) = f + t<sub>i</sub>
Let f = f + t<sub>i</sub>
End
Return the set of scheduled intervals [s(i), f(i)] for i = 1, ..., n
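The Earliest Deadline First pseudocode above can be sketched in a few lines of Python; the function name and the (t<sub>i</sub>, d<sub>i</sub>) tuple layout are illustrative assumptions.

```python
def earliest_deadline_first(jobs, s=0):
    """Schedule (t_i, d_i) jobs back to back in order of increasing deadline.

    Returns the (start, finish) intervals in scheduled order, along with
    the maximum lateness achieved.
    """
    ordered = sorted(jobs, key=lambda job: job[1])  # sort by deadline d_i
    f = s                                           # next free time; no idle time
    schedule = []
    max_lateness = 0
    for t_i, d_i in ordered:
        schedule.append((f, f + t_i))
        max_lateness = max(max_lateness, f + t_i - d_i)  # lateness is 0 if on time
        f += t_i
    return schedule, max_lateness
```

For jobs (1, 2), (2, 4), (3, 6) every job finishes by its deadline, so the maximum lateness is 0.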
Theorem 4.7 There is an optimal schedule with no idle time.
Observe that the schedule the algorithm produces has no “gaps”: times when the machine is not working, yet there are jobs left. We say a schedule has an inversion if a job i with a later deadline is scheduled before a job j with an earlier deadline (d<sub>j</sub> < d<sub>i</sub>); note that the greedy schedule has no inversions. The plan here is to gradually modify an optimal schedule O, preserving its optimality at each step (for example, by swapping adjacent inverted jobs), but eventually transform it into a schedule that is identical to the schedule A found by the greedy algorithm. We refer to this type of analysis as an exchange argument. The main step in showing the optimality of our algorithm is to establish that there is an optimal schedule that has no inversions and no idle time.
Theorem 4.8 All schedules with no inversions and no idle time have the same maximum lateness.
Theorem 4.9 There is an optimal schedule that has no inversions and no idle time.
Theorem 4.10 The schedule A produced by the greedy algorithm has optimal maximum lateness L.
Extensions A more difficult version of this problem would contain requests i that, in addition to the deadline di and the requested time ti, would also have an earliest possible starting time ri, called the release time. Problems with release times arise naturally in scheduling problems where requests can take the form: Can I reserve the room for a two-hour lecture, sometime between 1 P.M. and 5 P.M.?
Section 4.3: Optimal Caching: A More Complex Exchange Argument
Section 4.4: Shortest Paths in a Graph
The Problem
Designing the Algorithm
Dijkstra’s Algorithm (G, //l//)
Let S be the set of explored nodes
For each u ∈ S, we store a distance d(u)
Initially S = {s} and d(s) = 0
While S ≠ V
Select a node v ∉ S with at least one edge from S for which
d’(v) = min<sub>e = (u,v):u∈S</sub> d(u) + //l//<sub>e</sub> is as small as possible
Add v to S and define d(v) = d’(v)
EndWhile
Analyzing the Algorithm
Theorem 4.14 Consider the set S at any point in the algorithm’s execution. For each u ∈ S, the path Pu is a shortest s-u path.
Implementation and Running Time
Theorem 4.15 Using a priority queue, Dijkstra’s Algorithm can be implemented on a graph with n nodes and m edges to run in O(m) time, plus the time for n ExtractMin and m ChangeKey operations.
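The priority-queue implementation in Theorem 4.15 can be sketched in Python using the standard `heapq` module. Since `heapq` has no ChangeKey operation, this sketch uses the common lazy-deletion variant, where a better distance is simply pushed again and stale heap entries are skipped on extraction; the adjacency-list layout is an assumption.

```python
import heapq

def dijkstra(graph, s):
    """Shortest-path distances from s with nonnegative edge lengths.

    `graph` maps each node to a list of (neighbor, length) pairs.
    """
    d = {s: 0}       # best known distances; final once a node is popped
    heap = [(0, s)]  # candidate distances d'(v)
    while heap:
        dist, v = heapq.heappop(heap)
        if dist > d.get(v, float("inf")):
            continue  # stale entry; v was already explored via a shorter path
        for u, length in graph.get(v, []):
            alt = dist + length
            if alt < d.get(u, float("inf")):
                d[u] = alt
                heapq.heappush(heap, (alt, u))
    return d
```

On a small graph with edges s→u of length 1, s→v of length 4, and u→v of length 2, the algorithm finds d(v) = 3 via u rather than the direct edge of length 4.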
