Chapter 6: Dynamic Programming

Dynamic programming combines the divide-and-conquer and brute-force strategies: it (implicitly) looks at all possible solutions by decomposing problems into smaller ones, and it subtly uses what it already knows to pick the best solution without explicitly examining each one. The authors liken it to a balancing act that takes time to become comfortable with. Dynamic programming is the antithesis of the greedy strategy because it doesn't commit to building up the solution one step at a time. Often it is more powerful than the other techniques we've learned. As usual, we start with an example rather than with an attempt to define the abstract strategy.

6.1 Weighted Interval Scheduling: A Recursive Procedure

I learned a new word: “nondecreasing” (and I like it, since it takes care of those flat slopes which “increasing” fails hard at). Anyway, in this problem, the requests are ordered by finish time, with the latest finish times at the end. The crucial idea behind this problem is: either the optimal solution includes the last request, j, or it doesn't. This means the optimal solution can be expressed as the choice between including that request (adding the value of that request to the total and dealing with the remaining requests compatible with j, i.e., the first p(j) requests, where p(j) is the largest index i < j whose interval is compatible with j's) and not including it (finding the optimal solution of the first j-1 requests).

(6.1) OPT(j) = max(v_j + OPT(p(j)), OPT(j-1))

This is a recursive process that is hard to visualize at first. The initial problem with designing an algorithm is that it seems to require an exponential number of recursive calls (but does it really?!). Nah, we should realize that there are only n+1 distinct subproblems that need to be solved: OPT(0) through OPT(n). Just like with Fibonacci, we don't need to compute F(2) a million times. We can compute it only once and record its value in an easy-to-access array. This is the idea behind memoization. With the weighted interval scheduling problem, a memoized array allows the algorithm to make O(n) recursive calls, rather than an exponential number of them.
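
To convince myself this works, here's a quick Python sketch of the memoized recursion (not the book's pseudocode verbatim; the arrays v and p and the example numbers are my own, with p[j] precomputed):

<code python>
# Memoized Compute-Opt for weighted interval scheduling.
# Requests 1..n are sorted by nondecreasing finish time; v[j] is the
# value of request j and p[j] is the largest index i < j whose
# interval is compatible with j (0 if none). Index 0 is a dummy.

def m_compute_opt(j, v, p, M):
    if j == 0:
        return 0
    if M[j] is None:                       # solve each subproblem once
        M[j] = max(v[j] + m_compute_opt(p[j], v, p, M),
                   m_compute_opt(j - 1, v, p, M))
    return M[j]

v = [0, 2, 4, 4, 7, 2, 1]                  # made-up values
p = [0, 0, 0, 1, 0, 3, 3]                  # made-up compatibilities
n = len(v) - 1
M = [0] + [None] * n
print(m_compute_opt(n, v, p, M))           # prints 8 for this example
</code>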

(6.5) Given the array M of the optimal values of the sub-problems, Find-Solution returns an optimal solution in O(n) time.
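
And a sketch of the traceback, reusing v, p, M, and n from my snippet above. Each call does constant work before recursing on a strictly smaller index, which is why the whole thing is O(n):

<code python>
# Find-Solution: walk back through the filled array M, taking
# request j whenever including it is at least as good as skipping it.

def find_solution(j, v, p, M):
    if j == 0:
        return []
    if v[j] + M[p[j]] >= M[j - 1]:         # j is in an optimal solution
        return find_solution(p[j], v, p, M) + [j]
    return find_solution(j - 1, v, p, M)

print(find_solution(n, v, p, M))           # [1, 3, 5] for the arrays above
</code>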

6.2 Principles of Dynamic Programming: Memoization or Iteration over Subproblems

Recap/basics of dynamic programming: In the WIS problem, we converted an exponential-time recursive algorithm to a polynomial-time one by memoization. Realize that recursion is not necessary; an iterative algorithm to compute M is all we need, since once we have M, we have the solution. Start at M[0] and iterate through M until M is full. At each stage, it should write OPT(j) to M[j].* The running time of the iterative algorithm is O(n), since each entry takes constant time to fill in, as long as the principles below hold.
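
A sketch of the iterative version (same v/p conventions as my snippets in 6.1):

<code python>
# Iterative Compute-Opt: fill M from left to right. Each entry takes
# constant time, so the whole loop is O(n).

def iterative_compute_opt(v, p):
    n = len(v) - 1
    M = [0] * (n + 1)                      # M[0] = 0 is the base case
    for j in range(1, n + 1):
        M[j] = max(v[j] + M[p[j]], M[j - 1])
    return M                               # M[-1] matches m_compute_opt
</code>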

Basic principles (p. 260):

  • (i) There are only a polynomial number of subproblems.
  • (ii) The solution to the original problem can be easily computed from the solutions to the subproblems.
  • (iii) There is a natural ordering on subproblems from “smallest” to “largest,” together with an easy-to-compute recurrence that allows one to determine the solution to a subproblem from the solutions to some number of smaller subproblems.

* There is a good picture that displays this (Figure 6.5, p. 260).

6.3 Segmented Least Squares: Multi-way Choices

Problem: partition a set of points into some number of segments. Drawing n-1 segments (one segment between each pair of consecutive points) is not always optimal, since each line segment has a cost (and sometimes the points are close enough to a line to warrant longer segments).

Goal: minimize the sum of the squared errors of the points from their segments plus a fixed cost C for each line segment used.

We know the last point (j) is the endpoint of a segment whose starting point is somewhere earlier (i). This means that if the last segment covers points i through j, the optimal partition for all of the points (1 through j) can be written as OPT(j) = squared_error_cost(i,j) + C + OPT(i-1). Since we don't know where that last segment starts, we minimize over all choices of i.

This gives the recurrence (6.7), OPT(j) = min over 1 ≤ i ≤ j of (squared_error_cost(i,j) + C + OPT(i-1)), and from that we can come up with an algorithm.
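
Here's my rough Python sketch of the whole thing. The error formula is the standard least-squares best-fit line (I'm assuming points sorted by x-coordinate with distinct x values; the names and example are mine):

<code python>
# Segmented least squares: OPT[j] = min over i of e(i,j) + C + OPT[i-1].

def segment_error(pts, i, j):
    """Least-squares error of the best-fit line through pts[i..j]."""
    seg = pts[i:j + 1]
    n = len(seg)
    sx = sum(x for x, y in seg)
    sy = sum(y for x, y in seg)
    sxx = sum(x * x for x, y in seg)
    sxy = sum(x * y for x, y in seg)
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom if denom else 0.0   # slope
    b = (sy - a * sx) / n                               # intercept
    return sum((y - a * x - b) ** 2 for x, y in seg)

def segmented_least_squares(pts, C):
    n = len(pts)
    e = [[0.0] * n for _ in range(n)]      # e[i][j]: one line over i..j
    for i in range(n):
        for j in range(i, n):
            e[i][j] = segment_error(pts, i, j)
    OPT = [0.0] * (n + 1)                  # OPT[j]: best cost, first j points
    for j in range(1, n + 1):
        OPT[j] = min(e[i][j - 1] + C + OPT[i] for i in range(j))
    return OPT[n]

# three collinear points, then a sharp turn: the optimum uses two segments
pts = [(0, 0), (1, 1), (2, 2), (3, 1), (4, 0)]
print(segmented_least_squares(pts, C=0.5))   # 1.0: two perfect-fit segments
</code>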

6.4 Subset Sums and Knapsacks: Adding a Variable

The knapsack problem requires us to think about the remaining weight in the knapsack. This is why its subproblems require two parameters instead of just one (if an item is added to the knapsack, then we must subtract that object's weight from the total available weight for the knapsack). We memoize this information in an n-by-W array which we can fill up in O(nW) time.
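
A sketch of the table-filling, with made-up weights and values; M[i][cap] is the best value achievable using the first i items within capacity cap:

<code python>
# 0/1 knapsack: each cell either skips item i or takes it (if it fits).
# Filling the (n+1)-by-(W+1) table costs O(nW).

def knapsack(w, v, W):
    n = len(w)
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for cap in range(W + 1):
            M[i][cap] = M[i - 1][cap]            # skip item i
            if w[i - 1] <= cap:                  # or take it, if it fits
                M[i][cap] = max(M[i][cap],
                                v[i - 1] + M[i - 1][cap - w[i - 1]])
    return M[n][W]

print(knapsack([1, 2, 5, 6, 7], [1, 6, 18, 22, 28], 11))   # 40 (items 3 and 4)
</code>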

6.5 RNA Secondary Structure: Dynamic Programming over Intervals

Problem outline: Knowing secondary structures is useful. There are basic rules that govern RNA base pairing. We think nature produces secondary structures that have the maximum number of base pairs. Therefore, to predict secondary structure, we want an algorithm that takes an RNA sequence of bases as input and outputs the secondary structure that maximizes the number of base pairs for that sequence.

Solution outline: The recurrence is different from others we've seen. One rule is that the two bases in a pair must be separated by at least 4 other bases. Therefore the solution for any stretch of fewer than 6 bases is the empty set. For j greater than 5, we notice that base j is either in a pairing or it isn't. But we cannot phrase our recurrence only in terms of j. (OPT(j) won't work.) We need to include another variable i because of the noncrossing condition (another pairing rule): if base j pairs with some base t, the structure splits into the stretch from i to t-1 and the stretch from t+1 to j-1, so subproblems start at arbitrary bases, and it is not always the case that base 1 is the first base that could possibly be included in a pairing. Therefore the recurrence looks like this instead:

(6.13) OPT(i,j) = max(OPT(i,j-1), max over t of (1 + OPT(i,t-1) + OPT(t+1,j-1))), where the inner max is over all t such that i ≤ t < j-4 and b_t and b_j form an allowable base pair.

See Figure 6.16 for an illustration of building up the subproblems. The solution we want is OPT(1,n). The running time of the algorithm is bounded by O(n^3).
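
A sketch of how I'd fill in the (6.13) table, assuming the allowable pairs are just the Watson-Crick pairs A-U and C-G, building up by interval length (the toy string is mine):

<code python>
# RNA secondary structure: maximize noncrossing base pairs subject to
# the no-sharp-turn rule (pair ends separated by at least 4 bases).

def rna_max_pairs(b):
    n = len(b)
    if n < 6:
        return 0                           # sharp-turn rule: no pair fits
    allowed = {frozenset("AU"), frozenset("CG")}
    OPT = [[0] * n for _ in range(n)]
    for length in range(5, n):             # interval length j - i
        for i in range(n - length):
            j = i + length
            best = OPT[i][j - 1]           # case 1: base j is unpaired
            for t in range(i, j - 4):      # case 2: j pairs with t < j - 4
                if frozenset((b[t], b[j])) in allowed:
                    left = OPT[i][t - 1] if t > i else 0
                    right = OPT[t + 1][j - 1]
                    best = max(best, 1 + left + right)
            OPT[i][j] = best
    return OPT[0][n - 1]

print(rna_max_pairs("ACCGGUAGU"))          # 2 pairs for this toy string
</code>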

6.6 Sequence Alignment

Problem outline: Need a measure by which to compare strings for similarity. Must “define the notion of similarity between two strings” (K&T, 279). Search-engine spelling suggestions (“did you mean…”) and computational biology are applications of sequence alignment.

Solution outline: Find a minimum-cost alignment between the two strings. Alignments are composed of gaps and mismatches. The total cost of all gaps and mismatches is a measure of how dissimilar two strings are. So to see how similar two strings are, we need the minimum-cost alignment between them. We do this with the following recurrence relation:

(6.16) OPT(i,j) = min(mismatch_cost(x_i, y_j) + OPT(i-1,j-1), gap_cost + OPT(i-1,j), gap_cost + OPT(i,j-1))
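
A sketch with unit costs, so it computes plain edit distance (the book's version uses a general mismatch cost that depends on the pair of letters):

<code python>
# Sequence alignment table for (6.16). OPT[i][j] is the minimum cost
# of aligning the first i letters of x with the first j letters of y.

def alignment_cost(x, y, gap=1, mismatch=1):
    m, n = len(x), len(y)
    OPT = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        OPT[i][0] = i * gap                # x[:i] aligned against nothing
    for j in range(1, n + 1):
        OPT[0][j] = j * gap                # nothing aligned against y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if x[i - 1] == y[j - 1] else mismatch
            OPT[i][j] = min(cost + OPT[i - 1][j - 1],   # match x_i with y_j
                            gap + OPT[i - 1][j],        # x_i gets a gap
                            gap + OPT[i][j - 1])        # y_j gets a gap
    return OPT[m][n]

print(alignment_cost("ocurrance", "occurrence"))   # 2: one gap, one mismatch
</code>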

6.7 Sequence Alignment in Linear Space via Divide and Conquer

6.8 Shortest Paths in a Graph
