Section 5.5 - Integer Multiplication Section 5.5 looks at integer multiplication and how divide and conquer can be used to solve it. The algorithm students learn in elementary school (adding up the partial products) has a running time of O(n^2), because each of the n partial products takes O(n) time to compute. This section finds a better solution by breaking the product down into smaller subproducts. One option is to recursively compute the results for four n/2-bit instances and then combine them using the equation xy = x1y1 * 2^n + (x1y0 + x0y1) * 2^(n/2) + x0y0. This approach satisfies T(n) ≤ 4T(n/2) + cn for some constant c, which unfortunately still gives O(n^2) time. Therefore, to improve the running time we need to decrease the number of recursive calls from 4 to 3, which ultimately produces a running time asymptotically less than O(n^2). To get from 4 recursive calls down to 3, we recursively compute x1y1, x0y0, and the single product (x1 + x0)(y1 + y0), and then recover the middle term x1y0 + x0y1 by subtracting the first two products from the third. The resulting Recursive-Multiply algorithm is a 3-way branching recursion whose running time satisfies T(n) ≤ 3T(n/2) + cn, which solves to O(n^(log2 3)), roughly O(n^1.59).
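To make the three-way recursion concrete, here is a minimal sketch in Python (not from the book): it assumes nonnegative integers, splits each number at roughly half the larger bit length, and recovers the middle term by subtraction. The function name and splitting details are my own.

<code python>
def recursive_multiply(x, y):
    # Karatsuba-style multiplication: 3 recursive multiplications per level.
    # Base case: small operands are multiplied directly.
    if x < 2 or y < 2:
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    # Split each number into high and low halves: x = x1*2^half + x0, etc.
    x1, x0 = x >> half, x & ((1 << half) - 1)
    y1, y0 = y >> half, y & ((1 << half) - 1)
    p = recursive_multiply(x1 + x0, y1 + y0)   # (x1 + x0)(y1 + y0)
    a = recursive_multiply(x1, y1)             # x1*y1
    b = recursive_multiply(x0, y0)             # x0*y0
    # Middle term x1*y0 + x0*y1 is recovered as p - a - b.
    return (a << (2 * half)) + ((p - a - b) << half) + b
</code>

For example, recursive_multiply(1234, 5678) returns 7006652, the same value as 1234 * 5678.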

Chapter 6 - Dynamic Programming Chapter 6 discusses dynamic programming. The basic idea of dynamic programming is drawn from the intuition behind divide and conquer and is essentially the opposite of the greedy strategy: one implicitly explores the space of all possible solutions, by carefully decomposing things into a series of subproblems, and then building up correct solutions to larger and larger subproblems.

Section 6.1 - Weighted-Interval Scheduling This section looks at the first example of dynamic programming, the Weighted Interval Scheduling Problem, where each interval has a specific value/weight and we want to accept a set of compatible intervals of maximum total value. When designing an algorithm, we first look at the n requests, with each request i specifying a start time si and a finish time fi, and each interval i having a value vi. As we saw with the ordinary interval scheduling problem, two intervals are compatible if they do not overlap; we also define p(j) to be the largest index i < j such that intervals i and j are disjoint. The key to this problem is that for the optimal solution O, either interval n belongs to O or it does not. If n does belong to O, then no interval that overlaps with n can also be in the solution. But if n does not belong to O, then the optimal solution equals the optimal solution on the intervals 1, …, n-1. Therefore, for each request j that we look at: either j is in O, in which case OPT(j) = vj + OPT(p(j)), or j is not in O, in which case OPT(j) = OPT(j-1). Thus OPT(j) = max(vj + OPT(p(j)), OPT(j-1)). Yet, leaving the recursion as is would take exponential time in the worst case because the same subproblems are recomputed over and over, so we use memoization, which reduces the running time by eliminating the redundant calls. By doing this, the running time of M-Compute-Opt(n) reduces to O(n), if we assume the input intervals are sorted by their finish times. Yet computing the value of the optimum solution isn't the same as finding the optimum solution itself. To find the solution, we write an algorithm Find-Solution(j) that calls itself recursively only on strictly smaller values and makes a total of O(n) recursive calls. Therefore, given the array M of the optimal values of the subproblems, Find-Solution returns an optimal solution in O(n) time.
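A minimal sketch of the memoized recursion and the trace-back, assuming the intervals are sorted by finish time and the values p(j) are already stored in an array p (with p[j] the largest index i < j compatible with j); the array layout, sample data, and names here are illustrative, not taken from the book.

<code python>
def m_compute_opt(j, v, p, M):
    # M caches OPT(j) so each subproblem is solved only once (memoization).
    if j == 0:
        return 0
    if M[j] is None:
        M[j] = max(v[j] + m_compute_opt(p[j], v, p, M),
                   m_compute_opt(j - 1, v, p, M))
    return M[j]

def find_solution(j, v, p, M):
    # Trace back through M to recover the intervals of an optimal solution.
    if j == 0:
        return []
    if v[j] + M[p[j]] >= M[j - 1]:        # interval j is in some optimal solution
        return find_solution(p[j], v, p, M) + [j]
    return find_solution(j - 1, v, p, M)

# Hypothetical example: 4 intervals, 1-indexed values and p-array.
v = [0, 2, 4, 4, 7]
p = [0, 0, 0, 1, 0]
M = [0] + [None] * 4
m_compute_opt(4, v, p, M)      # value of the optimum, here 7
find_solution(4, v, p, M)      # one optimal set of intervals, here [4]
</code>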

Section 6.2 - Principles of Dynamic Programming: Memoization or Iteration over Subproblems In Section 6.2, the book discusses how iterating over subproblems can be more efficient than computing solutions recursively. Looking at the Weighted Interval Scheduling Problem again, we can go further and directly compute the entries in M by an iterative algorithm (rather than memoized recursion). To do this, we start with M[0] = 0 and keep incrementing j; each time we need to determine a value M[j], we just apply the recurrence OPT(j) = max(vj + OPT(p(j)), OPT(j-1)) from above. The running time of Iterative-Compute-Opt is O(n) as well, because it runs for n iterations and spends constant time on each. To start developing an algorithm based on dynamic programming, one needs a collection of subproblems that satisfies the following properties: 1- there are only a polynomial number of subproblems, 2- the solution to the original problem can be easily computed from the solutions to the subproblems, and 3- there is a natural ordering on the subproblems from “smallest” to “largest”, together with an easy-to-compute recurrence that allows one to determine the solution to a subproblem from the solutions to some number of smaller subproblems. Lastly, it is important to note that it may be useful to first define a recurrence by reasoning about the structure of an optimal solution, and then determine which subproblems will be necessary to unwind the recurrence.
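Under the same assumptions as before (intervals sorted by finish time, p(j) precomputed), a sketch of the iterative version simply fills the array M in one pass instead of recursing; names are illustrative.

<code python>
def iterative_compute_opt(n, v, p):
    # M[j] holds OPT(j); M[0] = 0, and each entry uses only earlier entries.
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = max(v[j] + M[p[j]], M[j - 1])
    return M
</code>

Each iteration does constant work, so the loop runs in O(n) time, matching the memoized version.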

Section 6.3 - Segmented Least Squares: Multi-way Choices The problem looked at in this section is the Segmented Least Squares Problem. Suppose the data consists of a set P of n points in the plane, denoted (x1, y1), (x2, y2), …, (xn, yn), and suppose x1 < x2 < … < xn. Given a line L (y = ax + b), the error of L with respect to P is the sum of its squared 'distances' to the points in P. However, any single line through the points could have terrible error, so we formulate a new problem: fit the points well, using as few lines as possible. Formulating the Problem: Each segment is a subset of P that represents a contiguous set of x-coordinates, that is, a subset {pi, pi+1, …, pj-1, pj} for some indices i ≤ j. For each segment S in our partition of P, we compute the line minimizing the error with respect to the points in S, according to the formulas given. The penalty of a partition is the sum of the following terms: 1- the number of segments into which we partition P, times a fixed multiplier C > 0, and 2- for each segment, the error value of the optimal line through that segment. So as the number of segments increases, the error terms in part 2 decrease, but the term in part 1 increases. Designing the Algorithm: It is important to note that the last point pn belongs to a single segment in the optimal partition, and that segment begins at some point pi. Therefore, if the identity of the last segment pi, …, pn is known, then we can remove those points from consideration and recursively solve the problem on the remaining points p1, …, pi-1. So if the last segment of the optimal solution is pi, …, pn, then the value of the optimal solution is OPT(n) = Ei,n + C + OPT(i-1). For the subproblem on the points p1, …, pj, OPT(j) = min over 1 ≤ i ≤ j of (Ei,j + C + OPT(i-1)), and the segment pi, …, pj is used in an optimum solution for the subproblem if and only if the minimum is obtained using index i. From this, the optimal values OPT(j) can be built up in order of increasing j. As with the Weighted Interval Scheduling Problem, there are two separate algorithms: one for computing the optimal values (Segmented-Least-Squares) and one for finding the actual segments (Find-Segments). Unfortunately these algorithms have larger running times than in the previous sections: Segmented-Least-Squares(n) takes O(n^3) overall because there are O(n^2) pairs (i, j) and for each pair Ei,j is computed in O(n) time, while Find-Segments(j) takes O(n^2) once all the Ei,j values have been determined.
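A sketch of the dynamic programming step, assuming a precomputed (1-indexed) table E where E[i][j] is the least-squares error of the best single line through pi, …, pj; filling that table naively is what costs O(n^3), while the loops below take O(n^2). Function and variable names are my own.

<code python>
def segmented_least_squares(n, E, C):
    # M[j] holds OPT(j), the minimum penalty for the points p1, ..., pj.
    M = [0.0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = min(E[i][j] + C + M[i - 1] for i in range(1, j + 1))
    return M

def find_segments(j, M, E, C):
    # Trace back: pick the i achieving the minimum for OPT(j), record the
    # segment (pi, ..., pj), and continue on the points before pi.
    segments = []
    while j > 0:
        i = min(range(1, j + 1), key=lambda i: E[i][j] + C + M[i - 1])
        segments.append((i, j))
        j = i - 1
    return list(reversed(segments))
</code>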

Section 6.4 - Subset Sums and Knapsacks: Adding a Variable In this section we look at the Knapsack Problem, where there are n items and each item i has a value vi and a weight wi. The section first treats the Subset Sum special case, where the goal is simply to make the knapsack as full as possible (up to a maximum capacity of W). This problem is similar to the Weighted Interval Scheduling Problem because, as in that problem, the key observation is that each item is either included in the sack or not. We cannot include item n when it is too big to fit (W < wn), in which case OPT(n, W) = OPT(n-1, W). But if we do include n, then OPT(n, W) = wn + OPT(n-1, W - wn), where W - wn is the remaining capacity; in general OPT(n, W) is the larger of these two options. So, as before, the algorithm is designed to build up a table of all OPT(i, w) values while computing each of them only once. Therefore, after building up a table of solutions M, with each value M[i, w] computed in O(1) time from previously computed values, the algorithm Subset-Sum(n, W) runs in O(nW) time. This running time is known as pseudo-polynomial, since it depends on the magnitude of W rather than on the number of bits needed to write it. To recover an optimal set S of items, an algorithm can trace back through the array M by a procedure similar to those we developed in the previous sections: given the table M of the optimal values of the subproblems, the optimal set S can be found in O(n) time.
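A sketch of the Subset-Sum table and the trace-back, assuming integer weights stored 1-indexed in w (w[0] unused) and an integer capacity W; names are illustrative. The double loop fills n*W entries in O(1) each, giving the pseudo-polynomial O(nW) bound, and the trace-back inspects each item once.

<code python>
def subset_sum(n, W, w):
    # M[i][c] = OPT(i, c): largest total weight achievable from items 1..i
    # within capacity c.
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(W + 1):
            if w[i] > c:                      # item i cannot fit
                M[i][c] = M[i - 1][c]
            else:                             # better of skipping or taking item i
                M[i][c] = max(M[i - 1][c], w[i] + M[i - 1][c - w[i]])
    return M

def recover_items(n, W, w, M):
    # Trace back through M in O(n) time to recover an optimal set of items.
    items, c = [], W
    for i in range(n, 0, -1):
        if w[i] <= c and M[i][c] == w[i] + M[i - 1][c - w[i]]:
            items.append(i)
            c -= w[i]
    return items
</code>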
