Table of Contents
6.1 Weighted Interval Scheduling
- A greedy algorithm can efficiently solve unweighted interval scheduling, but it fails when the intervals are weighted: the greedy choice may preclude a later, heavily weighted task. We can solve the weighted version without resorting to BFI by limited backtracking: again go through the list in order of finish time, but now, instead of committing to each available task, recursively consider the remaining possibilities conditional on including it and on excluding it, and include it if its weight plus the optimal value of the remaining tasks compatible with it exceeds the optimal value when it is excluded. A naive implementation of this recursion takes exponential time, but memoizing the partial results reduces it to O(n) (aside from the recursive calls, the computation per task is constant, and after memoizing no subproblem is solved more than once). Finding not only the value but the solution itself requires a further scan back through the array of intermediate results (a sketch follows this section's notes).
- No questions.
- No comments.
- No complaints.
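A minimal Python sketch of the memoized recurrence as I understand it (the task representation and helper names are my own, not the book's):

```python
from bisect import bisect_right

def weighted_interval_scheduling(tasks):
    """tasks: list of (start, finish, weight) tuples.
    Returns (optimal total weight, chosen tasks)."""
    tasks = sorted(tasks, key=lambda t: t[1])            # order by finish time
    finishes = [t[1] for t in tasks]
    # p[j]: index of the last task that finishes by the time task j starts
    p = [bisect_right(finishes, tasks[j][0]) - 1 for j in range(len(tasks))]

    memo = {}
    def opt(j):                                          # best weight among tasks 0..j
        if j < 0:
            return 0
        if j not in memo:
            memo[j] = max(opt(j - 1),                    # exclude task j
                          tasks[j][2] + opt(p[j]))       # include task j
        return memo[j]

    best = opt(len(tasks) - 1)
    # recover the actual schedule by scanning back through the memoized values
    chosen, j = [], len(tasks) - 1
    while j >= 0:
        if tasks[j][2] + opt(p[j]) >= opt(j - 1):
            chosen.append(tasks[j])
            j = p[j]
        else:
            j -= 1
    return best, chosen[::-1]
```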
6.2 Principles of Dynamic Programming
- This solution can be expressed not only as a recursive calculation with memoization but also as iteration over the subproblems: instead of recursively solving each subproblem as needed, start from the base case and fill in the table of subproblem solutions in order, ending with the full problem. The details are the same; the difference lies merely in the order in which the subproblems are scheduled for solution (see the first sketch after this section's notes). In general, dynamic programming is useful when there are a polynomial number of subproblems, a convenient way to move from solutions of the subproblems to a solution of the main problem, and a natural ordering on the subproblems.
- How is their criterion ii different from the second part of iii (the existence of a recurrence)?
- A note, at least, on the polynomial number of subproblems (and why the iterative setup is not always equivalent to the recursive): in Abstract Algebra we recently got an extra-credit problem implementing a dynamic-programming algorithm to count the three-free permutations of the first n natural numbers (rather convenient timing, really). Given the set of numbers yet to be placed, the algorithm determined whether each of them could be placed next; for each one that could, it was applied recursively to count the valid completions that placed that number next. There are really 2^n subproblems, but most of them prove inaccessible (I was able to calculate up to n=62 despite a rather memory-inefficient memoization structure). An iterative algorithm there would take exponential time; the recursive algorithm was not great, but still quite workable (I do not know of a tighter bound than O(2^n) space and O(n^2 * 2^n) time, but it is experimentally much lower). A reconstruction is sketched after these notes.
- No complaints.
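For contrast with the memoized version above, a sketch of the same recurrence run bottom-up (again my own Python, not the book's code):

```python
from bisect import bisect_right

def weighted_interval_scheduling_iterative(tasks):
    """Same recurrence as the memoized version, but the subproblems
    are solved in order rather than on demand."""
    tasks = sorted(tasks, key=lambda t: t[1])            # order by finish time
    finishes = [t[1] for t in tasks]
    p = [bisect_right(finishes, tasks[j][0]) - 1 for j in range(len(tasks))]

    # OPT[j + 1] = best total weight among tasks 0..j; OPT[0] = 0 is the base case
    OPT = [0] * (len(tasks) + 1)
    for j in range(len(tasks)):
        OPT[j + 1] = max(OPT[j],                         # exclude task j
                         tasks[j][2] + OPT[p[j] + 1])    # include task j
    return OPT[len(tasks)]
```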
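And a reconstruction of the three-free counter from the note above, assuming "three-free" means no three entries in arithmetic progression appear as a (not necessarily contiguous) subsequence; the memo key is the frozenset of numbers still to be placed, so only the reachable subsets are ever stored:

```python
from functools import lru_cache

def count_three_free(n):
    full = frozenset(range(1, n + 1))

    @lru_cache(maxsize=None)
    def completions(remaining):
        if not remaining:
            return 1
        placed = full - remaining
        total = 0
        for x in remaining:
            rest = remaining - {x}
            # placing x now creates an arithmetic progression y, x, z exactly
            # when some placed y and some still-unplaced z satisfy y + z == 2x
            if any(2 * x - y in rest for y in placed):
                continue
            total += completions(rest)
        return total

    return completions(full)
```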
6.3 Segmented Least Squares
- Dynamic programming does not require a binary choice. Segmented least squares attempts to determine the natural number of kinks in a set of points, minimizing the sum of squared deviations from the fitted segments plus a penalty for each segment. One can thus make a pass through the points, recursively determining the best way to fit lines to subsets of them (see the sketch after these notes). Computing the OLS fits for all O(n^2) intervals of points takes O(n^3), and the dynamic program itself takes O(n^2).
- No questions.
- No comments.
- No complaints.
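A sketch of the segmented-least-squares recurrence in Python (C stands for the per-segment penalty; the naive error precomputation is the O(n^3) step noted above):

```python
def segmented_least_squares(points, C):
    """points: list of (x, y) pairs sorted by x; C: penalty per segment.
    Returns the minimum of (total squared error + C * number of segments)."""
    n = len(points)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]

    # e[i][j]: squared error of the least-squares line through points i..j
    e = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            m = j - i + 1
            sx, sy = sum(xs[i:j+1]), sum(ys[i:j+1])
            sxx = sum(x * x for x in xs[i:j+1])
            sxy = sum(x * y for x, y in zip(xs[i:j+1], ys[i:j+1]))
            denom = m * sxx - sx * sx
            a = (m * sxy - sx * sy) / denom if denom else 0.0
            b = (sy - a * sx) / m
            e[i][j] = sum((y - a * x - b) ** 2
                          for x, y in zip(xs[i:j+1], ys[i:j+1]))

    # OPT[j]: best cost for the first j points;
    # OPT[j] = min over i of e(i, j) + C + OPT[i - 1], with OPT[0] = 0
    OPT = [0.0] * (n + 1)
    for j in range(1, n + 1):
        OPT[j] = min(e[i - 1][j - 1] + C + OPT[i - 1] for i in range(1, j + 1))
    return OPT[n]
```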
6.4 Subset Sums and Knapsacks
- The subset-sum problem is the selection of a subset (without replacement) of values whose sum is as close as possible to a target without exceeding it. Here one cannot recurse merely over items, since, unlike in weighted interval scheduling, the subproblems for a given choice are not independent of the choices already made (they affect the remaining target). Therefore one needs to define subproblems in two variables: the set of goods still to be considered and the weight still to fill (but one can still make the choices sequentially, so only the n prefix sets, each defined by its greatest element, ever arise). Total running time is O(nW), the result of filling in nW matrix entries in O(1) time each. The knapsack problem is a generalization, maximizing the sum of values subject to a weight constraint. However, the same recurrence holds, with merely a different notion of optimality, and it can also be solved in O(nW) (see the sketch after these notes).
- No questions.
- No insights.
- Again with the introduction of the false starts first? I can see their reasons, but still do not much like it.
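A sketch of the two-variable knapsack recurrence (subset sum is the special case where each item's value equals its weight):

```python
def knapsack(values, weights, W):
    """0/1 knapsack: OPT[i][w] is the best value using the first i items
    with capacity w. Runs in O(n * W) time and space."""
    n = len(values)
    OPT = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            OPT[i][w] = OPT[i - 1][w]                    # exclude item i
            if weights[i - 1] <= w:                      # include item i if it fits
                OPT[i][w] = max(OPT[i][w],
                                values[i - 1] + OPT[i - 1][w - weights[i - 1]])
    return OPT[n][W]
```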