Chapter 6.1: Weighted Interval Scheduling

The original interval scheduling problem is just the special case of weighted interval scheduling in which every value equals 1. Even in that special case, most of the greedy algorithms we tried did not solve the problem, and for the more general weighted interval scheduling problem no natural greedy algorithm is known to work at all. Because of this, we switch to dynamic programming. We still have n requests labeled 1, …, n, with each request i specifying a start time s_i and a finish time f_i. Each interval i now also has a value (or weight) v_i. Two intervals are compatible if they don't overlap. The goal is to select a subset of mutually compatible requests that maximizes the sum of the values of the selected intervals.

Let's suppose the requests are sorted in order of non-decreasing finish time, f_1 <= f_2 <= … <= f_n. We'll say that request i comes before j if i < j. For an interval j, we define p(j) to be the largest index i < j such that intervals i and j are disjoint, with p(j) = 0 if no request i < j is disjoint from j.
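Since the intervals are sorted by finish time, each p(j) can be found with a binary search over the finish times. Here is a small sketch (function and variable names are my own; it assumes an interval ending exactly when another starts counts as compatible):

```python
import bisect

def compute_p(intervals):
    """intervals: list of (start, finish) pairs sorted by finish time.
    Returns p as a 1-indexed list: p[j] is the largest i < j with
    f_i <= s_j, or 0 if no earlier interval is disjoint from j."""
    finishes = [f for (_, f) in intervals]
    p = [0] * (len(intervals) + 1)  # p[0] is unused
    for j, (s_j, _) in enumerate(intervals, start=1):
        # rightmost index i (1-indexed) with finishes[i-1] <= s_j,
        # searching only among intervals 1, ..., j-1
        p[j] = bisect.bisect_right(finishes, s_j, 0, j - 1)
    return p

intervals = [(1, 3), (2, 5), (4, 6), (6, 8)]  # sorted by finish time
print(compute_p(intervals))  # [0, 0, 0, 1, 3]
```

For example, p(4) = 3 because interval 3 finishes at time 6, exactly when interval 4 starts, while p(2) = 0 because interval 1 is still running when interval 2 starts.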

Now let's consider an optimal solution O for this problem. Either job n is in O or it is not. If n is in O, then no job strictly between p(n) and n can be in O, by the definition of p(n); so the remaining jobs in O must form an optimal solution to the subproblem consisting of jobs 1, …, p(n). If n is not in O, then O must be an optimal solution to the subproblem consisting of jobs 1, …, n-1. Therefore Opt(j) = max(v_j + Opt(p(j)), Opt(j-1)). It follows that request j belongs to an optimal solution if and only if v_j + Opt(p(j)) >= Opt(j-1).

An Algorithm

Compute-Opt(j):

  • If j = 0 then return 0
  • else return max(v_j+Compute-Opt(p(j)), Compute-Opt(j-1))

The problem with this algorithm is that it runs in exponential time in the worst case. For each j=2, 3, 4,…, n Compute-Opt generates separate recursive calls on problems of sizes j-1 and j-2, so the total number of calls will grow like the Fibonacci numbers.
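To see the blow-up concretely, here is a direct translation of Compute-Opt into Python (my own sketch, using 1-indexed lists with a dummy entry at index 0), run on the sample intervals above:

```python
def compute_opt(j, v, p):
    """Naive recursion from the text: exponential in the worst case."""
    if j == 0:
        return 0
    return max(v[j] + compute_opt(p[j], v, p),
               compute_opt(j - 1, v, p))

# Values (with a dummy v[0]) and p() for the four sample intervals.
v = [0, 2, 4, 4, 7]
p = [0, 0, 0, 1, 3]
print(compute_opt(4, v, p))  # 13, from choosing intervals 1, 3, 4
```

Each call on j spawns calls on p(j) and j-1, and nothing is shared between the two branches, which is exactly the Fibonacci-like growth described above.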

Memoization

In order to eliminate the redundancy in the above algorithm, we could save the values that have already been computed by using a technique called memoization.

Memoized Algorithm

M-Compute-Opt(j):

  • if j = 0 then return 0
  • else if M[j] is not empty then return M[j]
  • else:
  • define M[j] = max(v_j+M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
  • return M[j]

This looks very similar to the first algorithm, but by storing previously computed values we cut the running time down to O(n) (given the sorted intervals and precomputed p values). This returns the value of the optimal solution, but we also need to know which jobs to choose in order to achieve this optimal value.
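The memoized version needs only one extra array. A sketch in Python (again with my own names and the same sample data):

```python
def m_compute_opt(j, v, p, M):
    """Memoized recursion: each M[j] is computed at most once."""
    if j == 0:
        return 0
    if M[j] is None:  # "M[j] is empty" in the pseudocode
        M[j] = max(v[j] + m_compute_opt(p[j], v, p, M),
                   m_compute_opt(j - 1, v, p, M))
    return M[j]

v = [0, 2, 4, 4, 7]
p = [0, 0, 0, 1, 3]
M = [None] * len(v)
print(m_compute_opt(4, v, p, M))  # 13, same answer in O(n) calls
```

Since each of the n array entries is filled in exactly once, and filling one entry takes O(1) time beyond the recursive calls it triggers, the total work is O(n).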

Find-Solution(j):

  • if j = 0 then output nothing
  • else:
  • if v_j + M[p(j)] >= M[j-1] then output j together with the result of Find-Solution(p(j))
  • else output the result of Find-Solution(j-1)

This algorithm calls itself recursively on strictly smaller values, making a total of O(n) recursive calls.
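Putting it together, a sketch of recovering the chosen jobs from the filled-in table M (names are my own; M[0] is seeded with 0 so Find-Solution can read it at the base case):

```python
def m_compute_opt(j, v, p, M):
    if j == 0:
        return 0
    if M[j] is None:
        M[j] = max(v[j] + m_compute_opt(p[j], v, p, M),
                   m_compute_opt(j - 1, v, p, M))
    return M[j]

def find_solution(j, v, p, M):
    """Trace back through M, following the tie-breaking rule from the text."""
    if j == 0:
        return []
    if v[j] + M[p[j]] >= M[j - 1]:
        return find_solution(p[j], v, p, M) + [j]
    return find_solution(j - 1, v, p, M)

v = [0, 2, 4, 4, 7]   # dummy v[0]; values of the sample intervals
p = [0, 0, 0, 1, 3]
M = [0] + [None] * 4  # M[0] = 0 for the base case
m_compute_opt(4, v, p, M)
print(find_solution(4, v, p, M))  # [1, 3, 4]
```

Because each recursive call moves to a strictly smaller index (either p(j) or j-1), the traceback makes O(n) calls total, each doing constant work.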

I would rate this section a 10. It was very easy to read, especially because we have spent so much time hammering out the details of memoization in class!

courses/cs211/winter2014/journals/alyssa/chapter_6.1.txt · Last modified: 2014/03/25 20:49 by hardnetta
CC Attribution-Noncommercial-Share Alike 4.0 International