

Section 6.1: Weighted Interval Scheduling: A Recursive Procedure

Weighted Interval Scheduling

  • Job j starts at sj, finishes at fj, and has weight or value vj
  • Two jobs are compatible if they don't overlap
  • Goal: find maximum weight subset of mutually compatible jobs
  • Notation: Label jobs by finishing time: f1≤ f2 ≤ … ≤ fn
  • Definition: p(j) = largest index i < j such that job i is compatible with j
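The definitions above can be sketched in Python. The jobs below are hypothetical, chosen only for illustration; they are given as (start, finish, value) triples already sorted by finish time, with 1-based indexing to match the notation.

```python
# Hypothetical example jobs, sorted by finish time: (start, finish, value).
# Slot 0 is a sentinel so that job j lives at index j.
jobs = [None, (0, 3, 2), (1, 5, 4), (4, 6, 4), (2, 8, 7), (7, 9, 2), (5, 10, 1)]

def compute_p(jobs):
    """p[j] = largest index i < j such that job i is compatible with j (0 if none).

    Job i is compatible with job j when i finishes no later than j starts.
    """
    n = len(jobs) - 1
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j - 1, 0, -1):      # scan backwards for the largest such i
            if jobs[i][1] <= jobs[j][0]:   # finish(i) <= start(j): compatible
                p[j] = i
                break
    return p

print(compute_p(jobs))  # [0, 0, 0, 1, 0, 3, 2]
```

This naive scan is O(n^2); since the jobs are sorted by finish time, p(j) could also be found by binary search on the finish times in O(n log n) overall.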

The following recursive algorithm computes the value of the optimal schedule, where each request/job now carries a weight indicating its importance:

Compute-Opt(j)
    If j = 0 then
        Return 0
    Else
        Return max(vj + Compute-Opt(p(j)), Compute-Opt(j - 1))
    Endif
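As a sketch, this pseudocode translates almost directly into Python. The job values v and the precomputed p() array below are hypothetical, used only to make the example runnable (n = 6 jobs sorted by finish time):

```python
# Hypothetical instance: v[j] = value of job j, p[j] as defined above
# (v[0] and p[0] are unused sentinels so indexing matches the pseudocode).
v = [0, 2, 4, 4, 7, 2, 1]
p = [0, 0, 0, 1, 0, 3, 2]

def compute_opt(j):
    """OPT(j): either take job j plus the best schedule among jobs 1..p(j),
    or skip job j and take the best schedule among jobs 1..j-1."""
    if j == 0:
        return 0
    return max(v[j] + compute_opt(p[j]), compute_opt(j - 1))

print(compute_opt(6))  # 8
```

On instances where p(j) and j - 1 rarely coincide, the two recursive calls branch repeatedly and the running time grows exponentially, which motivates the memoized version below.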

The correctness of the algorithm follows directly by induction on j: Compute-Opt(j) correctly computes OPT(j) for each j = 1, 2, …, n. Unfortunately, if we implemented Compute-Opt exactly as written, it would take exponential time in the worst case, because it recomputes the same subproblems over and over. To obtain a better algorithm we use memoization, a technique that stores values that have already been computed so that each subproblem is solved only once. Below is the new algorithm, which runs in O(n) time in the worst case (assuming the jobs are already sorted by finish time and the values p(j) are precomputed).

M-Compute-Opt(j)
    If j = 0 then
        Return 0
    Else if M[j] is not empty then
        Return M[j]
    Else
        Define M[j] = max(vj + M-Compute-Opt(p(j)), M-Compute-Opt(j - 1))
        Return M[j]
    Endif
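A minimal Python sketch of M-Compute-Opt, again using the same hypothetical values v and pointers p as illustration; the array M plays the role of the memoization table, with None standing in for "empty":

```python
# Hypothetical instance (same shape as before): v[j] = value of job j,
# p[j] = largest compatible index < j.
v = [0, 2, 4, 4, 7, 2, 1]
p = [0, 0, 0, 1, 0, 3, 2]

M = [None] * len(v)   # M[j] caches OPT(j); None means "not yet computed"
M[0] = 0              # base case: no jobs, value 0

def m_compute_opt(j):
    if M[j] is None:
        # Each M[j] is filled in at most once, so the total work is O(n).
        M[j] = max(v[j] + m_compute_opt(p[j]), m_compute_opt(j - 1))
    return M[j]

print(m_compute_opt(6))  # 8
```

Each of the n entries of M is computed exactly once, and each computation does constant work beyond its recursive calls, giving the O(n) bound.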

So far we have simply computed the value of an optimal solution; but presumably we want a full optimal set of intervals as well. It is easy to extend M-Compute-Opt to recover an optimal solution in addition to its value, by tracing back through the filled-in array M. This extension of the algorithm is shown below.

Find-Solution(j)
    If j = 0 then
        Output nothing
    Else
        If vj + M[p(j)] ≥ M[j - 1] then
            Output j together with the result of Find-Solution(p(j))
        Else
            Output the result of Find-Solution(j - 1)
        Endif
    Endif

Since Find-Solution calls itself recursively only on strictly smaller values, it makes a total of O(n) recursive calls; and since it spends constant time per call, Find-Solution returns an optimal solution in O(n) time, given the filled-in array M of optimal values.
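A Python sketch of the traceback, assuming the same hypothetical instance as above and an array M already filled in by M-Compute-Opt:

```python
# Hypothetical instance: values, compatibility pointers, and the table
# M[j] = OPT(j) as it would be filled in by M-Compute-Opt.
v = [0, 2, 4, 4, 7, 2, 1]
p = [0, 0, 0, 1, 0, 3, 2]
M = [0, 2, 4, 6, 7, 8, 8]

def find_solution(j):
    """Return an optimal set of job indices among jobs 1..j, in increasing order."""
    if j == 0:
        return []
    if v[j] + M[p[j]] >= M[j - 1]:
        # Job j belongs to some optimal solution: take it and recurse on p(j).
        return find_solution(p[j]) + [j]
    # Otherwise an optimal solution omits job j.
    return find_solution(j - 1)

print(find_solution(6))  # [1, 3, 5]
```

Here the traceback selects jobs 1, 3, and 5, whose values 2 + 4 + 2 match the optimal total M[6] = 8.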

Section 6.2: Principles of Dynamic Programming: Memoization or Iteration over Subproblems

Iterative-Compute-Opt
    M[0] = 0
    For j = 1, 2, ..., n
        M[j] = max(vj + M[p(j)], M[j - 1])
    Endfor
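The iterative version builds the same array M bottom-up, with no recursion at all. A sketch in Python, again on a hypothetical instance:

```python
def iterative_compute_opt(v, p):
    """Fill in M[j] = OPT(j) for j = 0..n, iterating from smaller to larger j.

    v[j] = value of job j, p[j] = largest compatible index < j;
    v[0] and p[0] are unused sentinels.
    """
    n = len(v) - 1
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        # M[p[j]] and M[j-1] are already computed, so each step is O(1).
        M[j] = max(v[j] + M[p[j]], M[j - 1])
    return M

# Hypothetical instance used in the examples above.
v = [0, 2, 4, 4, 7, 2, 1]
p = [0, 0, 0, 1, 0, 3, 2]
print(iterative_compute_opt(v, p))  # [0, 2, 4, 6, 7, 8, 8]
```

Since the loop body does constant work and runs n times, the whole table is built in O(n) time, and Find-Solution can then trace back through it exactly as before.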

Section 6.3: Segmented Least Squares: Multi-way Choices

courses/cs211/winter2018/journals/boyese/chapter6.1522186441.txt.gz · Last modified: by boyese
CC Attribution-Noncommercial-Share Alike 4.0 International