Chapter 6: Dynamic Programming
Section 6.1: Weighted Interval Scheduling: A Recursive Procedure
The weighted interval scheduling problem is a generalization of the interval scheduling problem in which each interval has a value, and we want to select a set of mutually compatible intervals of maximum total value. To build this set we first design a recursive solution. We sort the intervals in order of nondecreasing finish time and define p(j) to be the largest index i < j such that interval i is compatible with interval j. For each interval j there are two choices: it is either in the optimal solution or it is not. If interval j is in the optimal solution, then no interval with an index between p(j) and j can be included, and the rest of the solution is the optimal solution on intervals 1 through p(j). If interval j is not in the optimal solution, then the optimal solution is simply the optimal solution on intervals 1 through j - 1. The optimal value is the maximum of these two options (page 254 of the text): OPT(j) = max(v_j + OPT(p(j)), OPT(j - 1)).

However, the plain recursion runs in O(2^n) time, which is not efficient, so we memoize it. Memoizing is the process of saving values we have already computed in an array, so that instead of recomputing the same optimal values over and over we can simply look them up. The memoized algorithm (page 256 of the text) runs in O(n) time if the input is already sorted by finish times. These algorithms only compute the numerical value of the optimal solution, but we want the set of intervals as well. To get it, after computing the optimal value we trace back through the memoized array and recover the intervals that produced that value. This Find-Solution algorithm (page 258 of the text) also runs in O(n) time.
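To make this concrete, here is a minimal Python sketch in the spirit of the text's M-Compute-Opt and Find-Solution. The (start, finish, value) tuple representation and the names compute_p, m_compute_opt, and find_solution are my own choices, not from the text.

```python
# Intervals are (start, finish, value) tuples, assumed sorted by finish time.
def compute_p(intervals):
    """p[j] = largest index i < j whose interval is compatible with interval j,
    or 0 if none exists (intervals are 1-indexed, as in the text).
    A simple backward scan is used here; binary search would be faster."""
    n = len(intervals)
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j - 1, 0, -1):
            if intervals[i - 1][1] <= intervals[j - 1][0]:  # i ends before j starts
                p[j] = i
                break
    return p


def m_compute_opt(j, intervals, p, memo):
    """Memoized recursion: each memo[j] is filled in at most once,
    so the total work across all calls is O(n)."""
    if memo[j] is None:
        v_j = intervals[j - 1][2]
        memo[j] = max(v_j + m_compute_opt(p[j], intervals, p, memo),
                      m_compute_opt(j - 1, intervals, p, memo))
    return memo[j]


def find_solution(j, intervals, p, memo):
    """Walk back through the filled memo array. Each step moves the index
    strictly downward (to p[j] or to j - 1), so the walk visits at most n
    indices in total; that is why the traceback is O(n)."""
    chosen = []
    while j > 0:
        v_j = intervals[j - 1][2]
        if v_j + memo[p[j]] >= memo[j - 1]:  # interval j was taken
            chosen.append(j)
            j = p[j]
        else:                                # interval j was skipped
            j -= 1
    return list(reversed(chosen))


if __name__ == "__main__":
    intervals = [(0, 3, 2), (1, 5, 4), (4, 6, 4), (5, 8, 7)]  # sorted by finish
    p = compute_p(intervals)
    memo = [0] + [None] * len(intervals)  # memo[0] = 0 is the base case
    print(m_compute_opt(len(intervals), intervals, p, memo))  # 11
    print(find_solution(len(intervals), intervals, p, memo))  # [2, 4]
```

Note that find_solution never recomputes anything: it only compares values already stored in the memo array, which is what keeps the traceback linear.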
I'm still a little confused about how exactly to memoize an algorithm and how to find the optimal value. How is the algorithm able to find the intervals in O(n) time in Find-Solution?
This section was wordy, but our discussion in class helped me understand how the algorithm worked. I would give it an 8/10.
Section 6.2: Principles of Dynamic Programming: Memoization or Iteration over Subproblems
This section breaks down the basic principles of dynamic programming after working through the example in Section 6.1. To compute the value of the optimal solution, we decide for each subproblem whether its item is in the optimal solution or not, and we store the resulting values in a memoized array; for weighted interval scheduling this runs in O(n) time. The basic idea of dynamic programming is to divide the problem into subproblems such that there are only a polynomial number of them, the solution to the original problem can be computed easily from the solutions to the subproblems, and the subproblems have a natural ordering from smallest to largest. Often, when creating a dynamic programming algorithm, we first define a recursive solution and then unwind the recurrence into an iterative algorithm that fills in the array of subproblem values directly, avoiding the exponential blowup of the plain recursion.
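As an illustration of iterating over subproblems, here is a short Python sketch of a bottom-up version in the spirit of the text's Iterative-Compute-Opt; the function name is my own, and it assumes the same (start, finish, value) interval representation and compute_p helper as the earlier sketch.

```python
def iterative_compute_opt(intervals, p):
    """Bottom-up version: fill M[0..n] in order of increasing j, so each
    entry costs one O(1) comparison and the whole loop runs in O(n) time."""
    n = len(intervals)
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        v_j = intervals[j - 1][2]
        M[j] = max(v_j + M[p[j]], M[j - 1])
    return M
```

The recurrence is exactly the same as in the recursive version; only the order of evaluation changes, which is the sense in which dynamic programming "iterates over subproblems."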
The motivation for dynamic programming is to solve problems that have no natural greedy solution and that cannot easily be solved by divide-and-conquer algorithms.
This section helped to clarify some of my questions about the first problem we solved. It was short and provided a solid map for other types of dynamic programming problems. I would give this section an 8/10.