This problem is a strictly more general version of the Interval Scheduling Problem, in which each interval has a certain value (or weight), and we want to accept a set of intervals of maximum total value. Designing a Recursive Algorithm. No natural greedy algorithm is known for this problem, which is what motivates our switch to dynamic programming. We have n requests, with each request specifying a start and finish time. Each interval has a value or weight. Two intervals are compatible if they do not overlap. Goal: choose a subset of mutually compatible intervals so as to maximize the sum of the values of the selected intervals.
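To make the goal concrete, here is a tiny sketch in Python on a made-up instance (the intervals and their values are my own example, not from the text). It checks every subset by brute force, which is exponential but fine for illustrating what "maximum-value compatible subset" means:

```python
from itertools import combinations

# Hypothetical instance: each interval is (start, finish, value).
intervals = [(1, 4, 2), (3, 5, 4), (0, 6, 4), (5, 7, 7), (3, 8, 2), (6, 10, 1)]

def compatible(a, b):
    # Two intervals are compatible if they do not overlap.
    return a[1] <= b[0] or b[1] <= a[0]

def brute_force_opt(intervals):
    # Try every subset of intervals; keep the best total value among
    # the subsets whose intervals are pairwise compatible.
    best = 0
    for r in range(len(intervals) + 1):
        for subset in combinations(intervals, r):
            if all(compatible(x, y) for x, y in combinations(subset, 2)):
                best = max(best, sum(v for _, _, v in subset))
    return best

print(brute_force_opt(intervals))  # → 11, from intervals (3,5,4) and (5,7,7)
```

The point of the rest of the section is to get this same answer in O(n) time after sorting, instead of trying all 2^n subsets.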

Suppose our requests are sorted in order of non-decreasing finish time. We define p(j) for an interval j to be the largest index i < j such that intervals i and j are compatible (and p(j) = 0 if no such interval exists). Let's consider an optimal solution O. There is something obvious we know about O: either interval n belongs to O, or it doesn't. If it belongs to O, then clearly no interval indexed strictly between p(n) and n can belong to O, so the rest of O is an optimal solution to the problem consisting of requests {1, 2, ..., p(n)}. But if n is not in O, then O is simply equal to the optimal solution to the problem consisting of requests {1, 2, 3, ..., n-1}.

OPT(j) = max(v_j + OPT(p(j)), OPT(j-1)) for any j between 1 and n, with OPT(0) = 0. The first term of the max corresponds to the case where j is in O, and the second to the case where j is not in O. And how do we decide whether j belongs to the optimal solution? Request j belongs to an optimal solution on {1, ..., j} if and only if v_j + OPT(p(j)) >= OPT(j-1). A recursive algorithm to compute OPT(n) is on Page 254. Unfortunately, if we implemented the algorithm as given so far, it would take exponential time to run in the worst case.
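A direct translation of the recurrence into Python looks like this (the v and p arrays encode a small hypothetical instance of mine, 1-indexed to match the text's notation; this is a sketch of the idea, not the book's exact pseudocode):

```python
# Hypothetical instance, sorted by non-decreasing finish time.
# v[j] is the value of interval j; p[j] is the largest i < j compatible with j.
# Index 0 is a dummy entry so indices match the 1-based notation of the text.
v = [0, 2, 4, 4, 7, 2, 1]
p = [0, 0, 0, 0, 2, 0, 3]

def compute_opt(j):
    # Direct translation of the recurrence. Exponential in the worst case,
    # because the same subproblems are recomputed over and over.
    if j == 0:
        return 0
    return max(v[j] + compute_opt(p[j]), compute_opt(j - 1))

print(compute_opt(6))  # → 11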

Memoizing the Recursion. The fact that our algorithm runs in exponential time as written is simply due to the spectacular redundancy in the number of times it issues each of these calls. How could we eliminate this redundancy? We could store the value of Compute-Opt in a globally accessible place, the first time we compute it and simply us it in place of all future recursive calls. This technique is referred to as memoization. An updated version of Compute-Opt is on Page 257. The running time of M-Compute-Opt is O(n) assuming the input intervals are sorted by their finish times. Proof on Page 257.
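Memoization only needs a small change: a globally accessible array M that caches each answer the first time it is computed. A sketch along the lines of M-Compute-Opt, on the same hypothetical v and p arrays as before:

```python
# Hypothetical instance (1-indexed, index 0 is a dummy entry).
v = [0, 2, 4, 4, 7, 2, 1]
p = [0, 0, 0, 0, 2, 0, 3]

# M caches subproblem answers; None marks "not yet computed".
M = [None] * len(v)
M[0] = 0

def m_compute_opt(j):
    # Each M[j] is filled in at most once, and each call does O(1) work
    # beyond its recursive calls, so the total running time is O(n).
    if M[j] is None:
        M[j] = max(v[j] + m_compute_opt(p[j]), m_compute_opt(j - 1))
    return M[j]

print(m_compute_opt(6))  # → 11
```

In modern Python the same effect can be had by decorating the plain recursive function with functools.lru_cache, but an explicit array makes the O(n) accounting easier to see.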

Computing a Solution in Addition to Its Value. So far we only know the value of the optimal solution. It would be easy to extend M-Compute-Opt so as to keep track of an optimal solution in addition to its value, but explicitly maintaining an array of solutions alongside the values would blow up the running time by a factor of O(n). We can avoid this blow-up by not maintaining such an array at all, and instead recovering the optimal solution from the values saved in the array M after the optimum value has been computed. Final algorithm on Page 258. Given the array M of optimal values of the sub-problems, Find-Solution returns an optimal solution in O(n) time.
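The traceback uses the same test from the recurrence: interval j is in an optimal solution exactly when v_j + OPT(p(j)) >= OPT(j-1). A sketch in the spirit of Find-Solution, again on my hypothetical instance (here M is filled bottom-up in a loop, which is an equally valid way to compute the same values):

```python
# Hypothetical instance (1-indexed, index 0 is a dummy entry).
v = [0, 2, 4, 4, 7, 2, 1]
p = [0, 0, 0, 0, 2, 0, 3]

# Fill M iteratively (bottom-up), then trace back through it.
n = len(v) - 1
M = [0] * (n + 1)
for j in range(1, n + 1):
    M[j] = max(v[j] + M[p[j]], M[j - 1])

def find_solution(j):
    # O(n): every step strictly decreases j, and no per-subproblem
    # solution array is ever stored.
    selected = []
    while j > 0:
        if v[j] + M[p[j]] >= M[j - 1]:
            selected.append(j)   # j is in an optimal solution
            j = p[j]
        else:
            j -= 1               # j is not; drop to the next subproblem
    return selected[::-1]

print(find_solution(n))  # → [2, 4], with total value M[n] = 11
```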

This section was convincing and gave me a basic understanding of dynamic programming; I'd rate it an 8/10.

courses/cs211/winter2014/journals/fred/6.1_weighted_interval_scheduling_a_recursive_procedure.txt · Last modified: 2014/03/24 11:13 by gisaf