Chapter 6: Dynamic Programming

6.1 - Weighted Interval Scheduling: A Recursive Procedure

Our Interval Scheduling algorithm from the previous chapter does not take weighted job values into account. For this problem, we use dynamic programming. Sort the jobs in ascending order of finish time. Jobs that don't overlap are compatible.

The optimal value over the first j requests is OPT(j) = max( v(j) + OPT(p(j)), OPT(j-1) ), where v(j) is the weight of job j and p(j) is the last job compatible with j.

Request j belongs to an optimal solution on the set {1,2,…,j} iff: v(j) + OPT(p(j)) >= OPT(j-1).

Dynamic programming is based on a recurrence equation that expresses the optimal solution in terms of the optimal solutions to smaller problems.

Memoizing:

M-Compute-Opt(j)

If j=0 then
	Return 0
Else if M[j] is not empty then
	Return M[j]
Else
	Define M[j] = max(v(j)+M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
	Return M[j]
End if

The running time of this algorithm is O(n) if the intervals are sorted by finish time. Given the array M of the optimal values of the sub-problems, the optimal solution itself can be returned in an additional O(n) time.
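The memoized recurrence above can be sketched in Python as follows; this is a sketch under the assumption that jobs are given as (start, finish, weight) tuples, and the function and variable names are illustrative rather than from the text.

```python
from bisect import bisect_right
from functools import lru_cache

def max_weight_schedule(jobs):
    # Sort by ascending finish time, as the text prescribes.
    jobs = sorted(jobs, key=lambda job: job[1])
    finish = [f for _, f, _ in jobs]
    # p[j] = number of jobs that finish no later than job j starts,
    # i.e. the (1-based) index of the last job compatible with job j.
    p = [bisect_right(finish, jobs[j][0]) for j in range(len(jobs))]

    @lru_cache(maxsize=None)
    def opt(j):
        # Optimal value over jobs 1..j; memoized like M-Compute-Opt.
        if j == 0:
            return 0
        v = jobs[j - 1][2]
        return max(v + opt(p[j - 1]), opt(j - 1))

    return opt(len(jobs))
```

For example, with jobs (0,3,3), (1,5,5), (4,6,4), the first and third are compatible, giving an optimal value of 7.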

6.2 - Principles of Dynamic Programming: Memoization or Iteration over Subproblems

The key to an efficient dynamic programming algorithm is the array M, which stores the values of all subproblems up to n. In this section, we implement an iterative algorithm.

Iterative-Compute-Opt

M[0] = 0
For j=1…n
	M[j] = max(v(j)+M[p(j)],M[j-1])
Endfor

Running time for Iterative-Compute-Opt is O(n).
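A minimal Python sketch of Iterative-Compute-Opt, together with the O(n) traceback that recovers the chosen jobs from M; it assumes v and p are 1-indexed lists (with a dummy slot at index 0), and the traceback uses the "request j belongs to an optimal solution iff v(j) + OPT(p(j)) >= OPT(j-1)" test from earlier.

```python
def iterative_opt(v, p):
    # v[1..n] are job weights, p[1..n] the compatible-predecessor indices.
    n = len(v) - 1
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = max(v[j] + M[p[j]], M[j - 1])

    # Traceback: walk from n down, following whichever choice the
    # recurrence made, to recover an optimal set of jobs.
    chosen, j = [], n
    while j > 0:
        if v[j] + M[p[j]] >= M[j - 1]:  # job j is in an optimal solution
            chosen.append(j)
            j = p[j]
        else:
            j -= 1
    return M[n], sorted(chosen)
```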

Properties required of dynamic programming:

-There are only a polynomial number of subproblems.
-The solution to the original problem can be easily computed from the solutions of subproblems.
-There is a natural ordering on subproblems from smallest to largest.

6.3 - Segmented Least Squares: Multi-way Choices

In multi-way choices, we have a polynomial number of possibilities to consider for the optimal solution at each subproblem. Our goal is to fit the set of points P with line segments of minimum total error, detecting changes in the direction of the points. To do this, we partition P into contiguous line segments. The penalty of a partition is the sum of:

-The number of segments into which we partition P, times a fixed, given multiplier C > 0
-For each segment, the error value of the optimal line through that segment

If the last segment of the optimal partition is p(i),…,p(n), then the value of the optimal solution is OPT(n) = e(i,n) + C + OPT(i-1), where e(i,n) is the minimum error of any line through p(i),…,p(n). The same recurrence applies to the subproblems. Once again, we apply memoization to compute OPT(n). The total running time is O(n^2) once all e(i,j) values are determined.
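The recurrence can be sketched in Python as below. This is a sketch only: it assumes points is a list of (x, y) pairs sorted by x, computes each e(i,j) on the fly with the standard least-squares formulas (so it does not achieve the O(n^2) bound, which requires precomputing the e(i,j) values), and returns only the optimal penalty, not the partition itself.

```python
def segmented_least_squares(points, C):
    n = len(points)

    def err(i, j):
        # Squared error of the best-fit line through points p_i..p_j.
        pts = points[i - 1:j]
        m = len(pts)
        sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
        denom = m * sxx - sx * sx
        a = (m * sxy - sx * sy) / denom if denom else 0.0  # slope
        b = (sy - a * sx) / m                              # intercept
        return sum((y - a * x - b) ** 2 for x, y in pts)

    # OPT[j] = e(i,j) + C + OPT[i-1], minimized over the start i
    # of the last segment.
    OPT = [0.0] * (n + 1)
    for j in range(1, n + 1):
        OPT[j] = min(err(i, j) + C + OPT[i - 1] for i in range(1, j + 1))
    return OPT[n]
```

Three collinear points with C = 1 are best covered by one segment with zero error, so the penalty is just C.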

6.4 - Subset Sums and Knapsacks: Adding a Variable

Adding durations and deadlines: we want to process jobs so as to keep the machine as busy as possible up to a cut-off point W. This Subset Sum Problem asks for a subset whose total processing time is as large as possible without exceeding W. In the Knapsack Problem, each request has both a value and a weight, and the total weight of the chosen set must be at most W. Similar to before, if n is not in the optimal set, then OPT(n,W) = OPT(n-1,W). Otherwise n is in the optimal solution, and we recursively find the best set of the remaining items within the remaining capacity W - w(n).

If w < w(i) then OPT(i,w) = OPT(i-1,w). Otherwise OPT(i,w) = max(OPT(i-1,w), w(i) + OPT(i-1, w-w(i))).

The algorithm runs in O(nW) time. The table of optimal values can then be used to find the chosen set S in O(n) additional time.
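The table-filling recurrence and the O(n) recovery of S can be sketched as follows; this assumes the Subset Sum setting, where an item's value equals its weight w[i] (for Knapsack, the added w[i-1] value term would become the item's value v[i-1]). Names are illustrative.

```python
def subset_sum(w, W):
    # OPT[i][cap] = best achievable total weight using items 1..i
    # with capacity cap.
    n = len(w)
    OPT = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for cap in range(W + 1):
            if cap < w[i - 1]:
                OPT[i][cap] = OPT[i - 1][cap]
            else:
                OPT[i][cap] = max(OPT[i - 1][cap],
                                  w[i - 1] + OPT[i - 1][cap - w[i - 1]])

    # Recover the set S in O(n) by retracing which choice each cell made.
    S, cap = [], W
    for i in range(n, 0, -1):
        if cap >= w[i - 1] and \
                OPT[i][cap] == w[i - 1] + OPT[i - 1][cap - w[i - 1]]:
            S.append(i)
            cap -= w[i - 1]
    return OPT[n][W], sorted(S)
```

With weights 2, 3, 4 and W = 5, the best subset is {1, 2} with total weight 5.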

courses/cs211/winter2014/journals/colin/chapter6.txt · Last modified: 2014/03/26 05:09 by mohnacsc
CC Attribution-Noncommercial-Share Alike 4.0 International