Chapter 6 – Dynamic Programming
My notes on the assigned sections of Chapter 6 of Algorithm Design by Jon Kleinberg and Éva Tardos. This chapter details dynamic programming. Dynamic programming involves “operating dangerously close to the edge of a brute-force search: although it's systematically working through the exponentially large set of possible solutions to the problem, it does this without ever examining them all explicitly.” This is usually achieved with memoization.
6.1 – Weighted Interval Scheduling: A Recursive Procedure
The weighted interval scheduling problem can be solved with a simple recursive approach: OPT(j) = max(vj + OPT(p(j)), OPT(j-1)). We know that request j belongs to an optimal solution on the set {1, 2, …, j} if and only if vj + OPT(p(j)) ≥ OPT(j-1). Also, Compute-Opt(j) correctly computes OPT(j) for each j = 1, 2, …, n. Adding memoization removes the redundancy in the Compute-Opt calls by storing the result of each call the first time it is computed. This memoized version is the dynamic programming solution to weighted interval scheduling.
Input: n jobs (each with start time sj, finish time fj, and value vj)

Sort jobs by finish time so that f1 ≤ f2 ≤ … ≤ fn
Compute p(1), p(2), …, p(n)

for j = 1 to n
    M[j] = empty
M[0] = 0

M-Compute-Opt(j):
    if M[j] is empty:
        M[j] = max(vj + M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
    return M[j]

M-Compute-Opt(n)
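The pseudocode above can be sketched in Python as follows. The function name, the `bisect`-based computation of p(j), and the sample data are my own illustration (they assume each job's start time is strictly before its finish time); the recurrence itself is the one from the book.

```python
from bisect import bisect_right

def weighted_interval_scheduling(jobs):
    """jobs: list of (start, finish, value) tuples. Returns OPT(n)."""
    jobs = sorted(jobs, key=lambda job: job[1])  # sort by finish time
    starts = [s for s, f, v in jobs]
    finishes = [f for s, f, v in jobs]
    n = len(jobs)

    # p[j] = largest 1-based index i < j whose job finishes before job j
    # starts (0 if no such job). Since finishes are sorted and each job has
    # start < finish, counting finishes <= starts[j-1] gives exactly that.
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        p[j] = bisect_right(finishes, starts[j - 1])

    M = [None] * (n + 1)  # memo table: M[j] = OPT(j), None = "empty"
    M[0] = 0

    def m_compute_opt(j):
        if M[j] is None:
            _, _, v = jobs[j - 1]
            # either take job j (plus the best compatible prefix) or skip it
            M[j] = max(v + m_compute_opt(p[j]), m_compute_opt(j - 1))
        return M[j]

    return m_compute_opt(n)
```

For example, with jobs (0, 3, 3), (2, 5, 5), (4, 7, 4), the first and third are compatible and together beat the middle job alone, so the optimum is 7.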
6.2 – Principles of Dynamic Programming: Memoization or Iteration over Subproblems
Basic Outline of Dynamic Programming:
- There's a polynomial number of subproblems.
- The original problem's solution can easily be computed from the subproblems' solutions.
- There's an ordering of the subproblems from smallest to largest with a simply computable recurrence.
There is a chicken-and-egg relationship between subproblems and recurrences in dynamic programming. It isn't clear that a collection of subproblems will be useful until you find the recurrence linking them, but it can also be difficult to think about recurrences without subproblems to define them on.
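A toy sketch of the two equivalent styles this outline describes, top-down memoization versus bottom-up iteration over subproblems, using the Fibonacci recurrence F(n) = F(n-1) + F(n-2) (my own illustration, not an example from the book):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # Top-down: recurse, caching each subproblem the first time it is solved.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_iter(n):
    # Bottom-up: fill a table in the smallest-to-largest subproblem order.
    F = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F[n]
```

Both run in linear time on a polynomial number of subproblems; the iterative version just makes the subproblem ordering explicit.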
6.3 – Segmented Least Squares: Multi-way Choices
This problem arises from the need to fit lines to a set of data points to achieve a “best fit.” The objective balances two costs: the total squared error of the fitted segments and the number of segments used, where each additional line incurs a fixed penalty c, so the goal is a good fit with as few lines as possible. A naive brute-force solution repeatedly recomputes the error of each candidate segment; these computations can be memoized. The two methods below incorporate memoization to achieve a dynamic programming solution.
INPUT: n points p1, …, pn and a penalty c

Segmented-Least-Squares():
    M[0] = 0
    e[0][0] = 0
    for j = 1 to n
        for i = 1 to j
            e[i][j] = least-squares error for the segment pi, …, pj
    for j = 1 to n
        M[j] = min over 1 ≤ i ≤ j of (e[i][j] + c + M[i-1])
    return M[n]

FindSegments(j):
    if j = 0:
        output nothing
    else:
        Find an i that minimizes e[i][j] + c + M[i-1]
        Output the segment {pi, …, pj}
        FindSegments(i-1)
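The two methods above can be sketched together in Python. The closed-form least-squares fit, the tie-breaking on all-equal x values, and the traceback via a recorded-choice array (standing in for the recursive FindSegments) are my own illustration; the DP structure follows the pseudocode.

```python
def least_square_error(points, i, j):
    """Squared error of the best-fit line through points[i..j] (0-based, inclusive)."""
    pts = points[i:j + 1]
    m = len(pts)
    sx = sum(x for x, y in pts)
    sy = sum(y for x, y in pts)
    sxx = sum(x * x for x, y in pts)
    sxy = sum(x * y for x, y in pts)
    denom = m * sxx - sx * sx
    if denom == 0:  # all x equal: a vertical line fits exactly (assumption)
        return 0.0
    a = (m * sxy - sx * sy) / denom     # slope of the least-squares line
    b = (sy - a * sx) / m               # intercept
    return sum((y - a * x - b) ** 2 for x, y in pts)

def segmented_least_squares(points, c):
    """Return (minimum total cost, segments as 1-based (i, j) index pairs)."""
    points = sorted(points)
    n = len(points)
    # e[i][j] = least-squares error of one line through points i..j (1-based)
    e = [[0.0] * (n + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        for i in range(1, j + 1):
            e[i][j] = least_square_error(points, i - 1, j - 1)
    M = [0.0] * (n + 1)
    choice = [0] * (n + 1)  # the i achieving each minimum, for the traceback
    for j in range(1, n + 1):
        M[j], choice[j] = min((e[i][j] + c + M[i - 1], i)
                              for i in range(1, j + 1))
    # FindSegments: walk the recorded choices back from n
    segments = []
    j = n
    while j > 0:
        i = choice[j]
        segments.append((i, j))
        j = i - 1
    segments.reverse()
    return M[n], segments
```

On four collinear points with c = 1.0, a single segment has zero error, so the total cost is just the one-line penalty of 1.0.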
Overall, I'd give this section a 9/10 on readability and interestingness.
