Chapter 6: Dynamic Programming
6.1: Weighted Interval Scheduling: A Recursive Procedure
We can create a more general version of the interval scheduling problem, one that a greedy algorithm no longer solves. Using dynamic programming, we can account for the intervals not all having the same value. The dynamic programming solution is built from a recursive algorithm. We still have n requests, each with a start time and a finish time, but now each interval j also has a value v_j. Two intervals are still compatible if they do not overlap, and our goal is to select a set of mutually compatible intervals that maximizes the total value.
If we sort the requests by finish time, we get a natural order in which to consider the intervals. For each interval j, define p(j) to be the largest index i < j such that interval i is disjoint from interval j, with p(j) = 0 if no such interval exists. Since the selected intervals must not overlap, an optimal solution either contains interval j or it does not, and finding the optimal solution reduces to looking at optimal solutions of smaller subproblems. This gives the recurrence OPT(j) = max(v_j + OPT(p(j)), OPT(j-1)), with OPT(0) = 0, and interval j belongs to an optimal solution on the first j intervals if and only if v_j + OPT(p(j)) >= OPT(j-1). This gives us a recursive algorithm to compute the optimal value.
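The recurrence can be sketched directly as a recursive function. This is a minimal illustration, not the book's exact pseudocode: it assumes the intervals are already sorted by finish time, that values[j] holds v_j, and that p[j] has been precomputed, with both arrays 1-indexed via a dummy entry at position 0.

```python
def compute_opt(j, values, p):
    """Plain recursion OPT(j) = max(v_j + OPT(p(j)), OPT(j-1)).
    Assumes values[j] = v_j and p[j] = largest i < j compatible with j,
    both 1-indexed with a dummy 0 entry. Exponential without memoization."""
    if j == 0:
        return 0  # OPT(0) = 0: no intervals to choose from
    return max(values[j] + compute_opt(p[j], values, p),
               compute_opt(j - 1, values, p))
```

For example, with four intervals of values 2, 4, 4, 7 (sorted by finish time) and p = [0, 0, 0, 1, 3], `compute_opt(4, [0, 2, 4, 4, 7], [0, 0, 0, 1, 3])` returns 13, choosing intervals 1, 3, and 4.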
This method, however, has an exponential running time, which is very undesirable: the recursion solves the same subproblems over and over, as a tree of the calls Compute-Opt makes will show. We can modify the algorithm to run in polynomial time through memoization: each computed value OPT(j) is stored in a globally accessible array that every recursive call can consult, so each subproblem is solved only once. With memoization, the algorithm runs in O(n) time (once the intervals are sorted and the p(j) values are computed).
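Memoization can be sketched by adding a shared array M to the recursion above. Again a hedged illustration, not the book's pseudocode, with the same assumptions: values[j] = v_j and p[j] precomputed, both 1-indexed with a dummy 0 entry.

```python
def m_compute_opt(values, p):
    """Memoized recursion for weighted interval scheduling.
    Assumes values[j] = v_j and p[j] = largest i < j compatible with j,
    both 1-indexed with a dummy 0 entry. Runs in O(n): the shared array M
    caches OPT(j), so each subproblem is computed only once."""
    n = len(values) - 1
    M = [None] * (n + 1)  # M[j] caches OPT(j) across all recursive calls
    M[0] = 0              # base case: OPT(0) = 0
    def rec(j):
        if M[j] is None:  # only compute OPT(j) the first time it is needed
            M[j] = max(values[j] + rec(p[j]), rec(j - 1))
        return M[j]
    return rec(n)
```

On the same hypothetical instance as before, `m_compute_opt([0, 2, 4, 4, 7], [0, 0, 0, 1, 3])` returns 13, now with only one computation per subproblem.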
As of right now, this solution only gives us the optimal value. If we also want the optimal set of intervals, we extend the memoized algorithm to trace back through the stored values: interval j is included exactly when v_j + OPT(p(j)) >= OPT(j-1). This traceback also runs in O(n) time.
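The traceback can be sketched as a short recursive function over the filled array M. As before, this is an illustrative sketch assuming 1-indexed values, p, and M arrays with dummy 0 entries, where M[j] already holds OPT(j).

```python
def find_solution(j, values, p, M):
    """Recover an optimal set of interval indices from the filled memo
    array M. Assumes values[j] = v_j, p[j] as defined in the text, and
    M[j] = OPT(j), all 1-indexed with a dummy 0 entry. Runs in O(n)."""
    if j == 0:
        return []  # no intervals left to consider
    # Include j exactly when taking it is at least as good as skipping it.
    if values[j] + M[p[j]] >= M[j - 1]:
        return find_solution(p[j], values, p, M) + [j]
    return find_solution(j - 1, values, p, M)
```

On the running example (values 2, 4, 4, 7; p = [0, 0, 0, 1, 3]; M = [0, 2, 4, 6, 13]), `find_solution(4, ...)` returns [1, 3, 4], the intervals achieving the optimal value 13.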
6.2: Principles of Dynamic Programming: Memoization or Iteration over Subproblems
We can now develop the general dynamic programming pattern of iterating over subproblems. The key is an array M whose entry M[j] stores the solution to the j-th subproblem. We can compute the entries of M with an iterative algorithm rather than a recursive one. This algorithm runs in O(n) time, as it performs only n iterations and spends constant time on each.
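For weighted interval scheduling, the iterative version fills M from left to right using the same recurrence. A minimal sketch under the same assumptions as before (1-indexed values and p arrays with dummy 0 entries):

```python
def iterative_compute_opt(values, p):
    """Iterative DP for weighted interval scheduling. Assumes
    values[j] = v_j and p[j] = largest i < j compatible with j, both
    1-indexed with a dummy 0 entry. Fills M in n iterations, constant
    work each, so O(n) total; returns the full array M."""
    n = len(values) - 1
    M = [0] * (n + 1)  # M[0] = 0 is the base case
    for j in range(1, n + 1):
        # Each entry depends only on entries already filled in.
        M[j] = max(values[j] + M[p[j]], M[j - 1])
    return M
```

Because M[p[j]] and M[j-1] are always computed before M[j], the loop never revisits a subproblem, which is exactly what the memoized recursion achieves, but without recursion overhead.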
Problems solved by dynamic programming share the following properties: there are only a polynomial number of subproblems; the solution to the overall problem can be computed easily from the solutions to the subproblems; and there is a natural ordering on the subproblems from "smallest" to "largest". In practice, it may be difficult to determine whether to first work out how to link the subproblems together, or to first work out how to solve the subproblems themselves.
