Greedy Algorithms
Greedy algorithms build a solution through a sequence of localized decisions that together yield a globally optimal solution. Fortunately, the ability of a greedy algorithm to solve a nontrivial problem usually reveals a great deal about the structure and nature of the problem itself. In the first two sections of the chapter we will work through two example problems to illustrate two typical methods of proving that a greedy algorithm indeed yields an optimal solution:
- greedy stays ahead
- exchange (and structural) argument.
4.1: Interval Scheduling: Greedy Stays Ahead
The Interval Scheduling Problem consists of a set of n requests for use of a single resource, the ith request corresponding to an interval of time starting at s(i) and ending at f(i). Essentially, we want to schedule the maximum number of “jobs” such that no two of them overlap. The efficient method for this is a greedy algorithm.
In this particular case, the book (and in class, we) discussed that the greedy rule should be to accept the valid job (one that does not overlap any job already accepted) which finishes first, that is, the job which minimizes f(i). This maximizes the time left open after adding a single job. So, we let R denote the list of jobs we haven't processed yet, and A the set of jobs we've accepted in building our final schedule. The algorithm (as discussed in class, because the book's version is far too removed from actual implementation) is thus as follows:
- Sort jobs in R by finish times so that f(1) ≤ f(2) ≤ … ≤ f(n)
- A = {}
- for j = 1 to n
  - if job j is compatible with A
    - add j to A
- return A
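The pseudocode above can be sketched directly in Python; the function name and the job representation as (start, finish) pairs are illustrative choices, not from the notes:

```python
def interval_schedule(jobs):
    """Return a maximum-size set of pairwise non-overlapping jobs.

    Each job is a (start, finish) pair.
    """
    # Sort jobs in R by finish time: f(1) <= f(2) <= ... <= f(n)
    R = sorted(jobs, key=lambda job: job[1])
    A = []                        # accepted jobs
    last_finish = float("-inf")   # finish time of the last job added to A
    for s, f in R:
        # Job is compatible with A iff it starts no earlier than the
        # finish time of the most recently accepted job.
        if s >= last_finish:
            A.append((s, f))
            last_finish = f
    return A

jobs = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(interval_schedule(jobs))  # [(1, 4), (5, 7), (8, 11)]
```

Tracking only the finish time of the most recently accepted job makes each compatibility check constant time, which is what the runtime analysis below relies on.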
Analyzing the Algorithm
First, it's clear that A is indeed a compatible set of jobs. Next, we must show that A is optimal. Suppose that O is an optimal set of intervals; we must show that |A| = |O|. Let ir and jr denote the rth jobs of A and O respectively, each listed in order of finish time.
Note then that our greedy rule guarantees that f(i1) ≤ f(j1), because the greedy rule automatically picks the job with the earliest finish time. Then, by induction we can prove that

- for all indices r ≤ k, where k = |A|, we have that f(ir) ≤ f(jr).
Now, if A were not optimal, then O would have to contain more jobs, so there would be a job j(k+1) in O but not in A. But we know that for each index up to k, the finish time of the greedy solution's job is at most the finish time of the same-numbered job in O; in particular f(ik) ≤ f(jk), so j(k+1) starts after f(ik) and is compatible with A. The greedy algorithm would therefore have accepted it too, a contradiction.
So, we know A returned by the greedy algorithm is indeed optimal.
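As an illustrative sanity check of this optimality claim (not part of the proof), we can compare the size of the greedy schedule against a brute-force optimum over all subsets on small random instances; the helper names are hypothetical:

```python
import random
from itertools import combinations

def greedy_size(jobs):
    """Size of the schedule produced by the earliest-finish-time rule."""
    count, last = 0, float("-inf")
    for s, f in sorted(jobs, key=lambda j: j[1]):
        if s >= last:
            count, last = count + 1, f
    return count

def brute_force_size(jobs):
    """Size of a largest pairwise-compatible subset, by exhaustive search."""
    def compatible(subset):
        subset = sorted(subset, key=lambda j: j[1])
        return all(subset[i][1] <= subset[i + 1][0]
                   for i in range(len(subset) - 1))
    return max(k for k in range(len(jobs) + 1)
               for sub in combinations(jobs, k) if compatible(sub))

random.seed(0)
for _ in range(50):
    jobs = [(s, s + random.randint(1, 5))
            for s in (random.randint(0, 10) for _ in range(6))]
    assert greedy_size(jobs) == brute_force_size(jobs)
print("greedy matches the brute-force optimum on all trials")
```

The brute force is exponential in n, of course; it is only feasible here because the random instances are tiny.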
Runtime
Sorting the jobs takes O(n log n) time, as we know. Then, each comparison and insertion takes only constant time, because we can maintain the finish time of the last job added to A and simply compare it against the start time of the next job. So, the for loop is O(n), and our whole algorithm runs in O(n log n).
