Chapter 6

Intro

The intro describes dynamic programming as “operating dangerously close to brute force…systematically working through the exponentially large set of possible solutions to the problem…without ever examining them all explicitly.” If there were a Hollywood movie called “Dynamic Programming,” I think that sentence would make a pretty good dramatic voiceover for the trailer.

6.1

Briefly introduces the weighted interval scheduling problem, noting that the greedy algorithm from the unweighted interval scheduling problem works only for the special case where all weights are the same. Like the greedy algorithm, their dynamic programming algorithm orders the intervals by nondecreasing finish time. Then they articulate the “dichotomy” in the problem: the last interval “belongs to [the optimal solution] O…or it doesn't.” If it does, then no intervals incompatible with it are in O; if it doesn't, you just hack the last interval off and solve the rest. With p(j) defined as the largest index i < j whose interval is compatible with interval j, they come up with a little recurrence relationshippy thing

OPT(j) = max(v_j + OPT(p(j)), OPT(j - 1)).

Then they write the recursive algorithm straight from that recurrence, but uh oh…it turns out to be exponential because the same subproblems get recalculated over and over. But then along comes memoization to save the day, which just stores the value of every call to OPT(j) so it never has to be recalculated in later calls. Brilliant. Last, they cover tracing back through the memoization array to identify the intervals in the solution instead of just its value.
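
Here's a rough Python sketch of how I understand the memoized version plus the traceback (the code and names like v, p, and memo are my own, not the book's pseudocode). It assumes the intervals are already sorted by nondecreasing finish time and that p(j) has been precomputed.

  # Memoized weighted interval scheduling (sketch, 1-indexed; v[0] and p[0] are placeholders).
  # p[j] = largest index i < j whose interval is compatible with interval j (0 if none).
  def weighted_interval_scheduling(v, p):
      n = len(v) - 1
      memo = {0: 0}                      # OPT(0) = 0

      def opt(j):
          if j not in memo:
              # the dichotomy: interval j is in the optimal solution, or it isn't
              memo[j] = max(v[j] + opt(p[j]), opt(j - 1))
          return memo[j]

      best = opt(n)

      # trace back through the memo to recover which intervals were chosen
      chosen, j = [], n
      while j > 0:
          if v[j] + memo[p[j]] >= memo[j - 1]:
              chosen.append(j)
              j = p[j]
          else:
              j -= 1
      return best, list(reversed(chosen))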

Readability: 8/10

6.2

This section just points out that the whole recursion thing is unnecessary. You can't compute the final values of the solution (the end of the memoization array) without the earlier values anyway, so you might as well quit starting from the end and making all those backwards calls; just build the array iteratively from the beginning.
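
A minimal sketch of that iterative, bottom-up version, under the same assumptions about v and p as the memoized sketch above (again my own code, not the book's):

  # Bottom-up weighted interval scheduling: fill M[0..n] in order instead of recursing.
  def iterative_weighted_interval_scheduling(v, p):
      n = len(v) - 1
      M = [0] * (n + 1)                  # M[0] = OPT(0) = 0
      for j in range(1, n + 1):
          M[j] = max(v[j] + M[p[j]], M[j - 1])
      return M[n]                        # the traceback works exactly as before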

They then list three requirements for dynamic programming to be applicable. This was either the day I missed class, or I wasn't paying attention, or it wasn't in the lecture; either way they seemed interesting, so I'll go ahead and write them down: “(i) There are only a polynomial number of subproblems. (ii) The solution to the original problem can be easily computed from the solutions to the subproblems. (For example, the original problem may actually be one of the subproblems.) (iii) There is a natural ordering on subproblems from ‘smallest’ to ‘largest,’ together with an easy-to-compute recurrence that allows one to determine the solution to a subproblem from the solutions to some number of smaller subproblems.”

Readability: 8/10

6.3

This section applies dynamic programming to the segmented least squares problem. Kind of a neat coincidence: I just encountered the whole least squares problem for the first time in INTR 202. That's the odd thing about these “idea” things; you stumble across them just once and they never stop haunting your footsteps, you ever notice that? It was a useful read, considering I missed this day of class for sure.

Anyway, the idea of the segmented least squares problem is that you want to minimize the sum of squared errors, but not with just one straight line: with multiple straight lines. How many straight lines? Well, that's something the algorithm needs to detect. Of course you get a better fit the more segments you use, but you're looking for an approximation, and a solution with too many lines isn't really going to be useful. So you give the algorithm some penalty C to charge for each segment. There's no objectively correct value for C, I guess; it's sort of subjectively selected by whoever's using the algorithm, I suppose, to suit their needs. So you're trying to minimize a “score” that consists of C times the number of segments plus the aggregate squared error of all the segments.

Anyway, the key insight in designing the algorithm, as the authors put it, is that “if we knew the identity of the last segment…we could remove those points from consideration and recursively solve the problem on the remaining points.” With e_{i,j} as the minimum squared error of a single line fit through points i through j, they get a recurrence relationny thingy like

OPT(j) = min_{1 ≤ i ≤ j} (e_{i,j} + C + OPT(i - 1)).

The algorithm can be made to run in O(n^2) time once the e_{i,j} values are computed.
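
Here's a rough Python sketch of the recurrence plus the traceback (my own code and names, not the book's pseudocode), assuming the errors e[i][j] for fitting a single line through points i..j have already been computed and are handed in as a table:

  # Segmented least squares DP (sketch, points 1..n).
  # e[i][j] = min squared error of one line through points i..j; C = per-segment penalty.
  def segmented_least_squares(e, C, n):
      OPT = [0.0] * (n + 1)              # OPT[0] = 0: no points, no segments, no cost
      last = [0] * (n + 1)               # last[j] = start of the last segment ending at j
      for j in range(1, n + 1):
          best_i, best_val = 1, float("inf")
          for i in range(1, j + 1):
              val = e[i][j] + C + OPT[i - 1]
              if val < best_val:
                  best_i, best_val = i, val
          OPT[j], last[j] = best_val, best_i

      # trace back to recover the segment boundaries
      segments, j = [], n
      while j > 0:
          segments.append((last[j], j))
          j = last[j] - 1
      return OPT[n], list(reversed(segments))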

Readability: 10/10 (I was surprised it was so easy to follow even though I skipped lecture)

6.4

This section introduces the knapsack problem, but first a scheduling problem where you want to pack jobs with given weights (durations) onto a machine so it's as busy as possible without exceeding the available time W. They demonstrate that some intuitive greedy solutions don't work, and then try doing it the same way they solved weighted interval scheduling, but that doesn't work out either, because removing a job from consideration doesn't just narrow the set of remaining jobs; it also decreases the available weight. But all is not lost: you just add another variable to OPT, the remaining capacity, and get a lot more subproblems, which you can iterate through to fill a two-dimensional memoization array. The algorithm is O(nW), where n is the number of jobs and W is the amount of time you're trying to fill up.
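
A sketch of that two-dimensional table in Python (my own code and names; w[1..n] are the job durations and W is the machine's capacity):

  # Subset sum / machine-loading DP (sketch, 1-indexed; w[0] is a placeholder).
  # M[j][x] = most time we can fill using only jobs 1..j with x units of capacity left.
  def subset_sum(w, W):
      n = len(w) - 1
      M = [[0] * (W + 1) for _ in range(n + 1)]
      for j in range(1, n + 1):
          for x in range(W + 1):
              if w[j] > x:               # job j doesn't fit in the remaining capacity
                  M[j][x] = M[j - 1][x]
              else:                      # leave job j out, or take it and lose w[j] capacity
                  M[j][x] = max(M[j - 1][x], w[j] + M[j - 1][x - w[j]])
      return M[n][W]                     # O(nW) entries, constant work each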

The knapsack problem is a little different, because instead of just trying to fill something up to the max, each item you have has a value, and you're trying to maximize total value subject to the weight limit. The subset sum problem is pretty much a special case of the knapsack problem where the value of every item is equal to its weight.
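
The change to the code is tiny: the “take it” branch adds the item's value instead of its weight (again my own sketch; setting v[j] = w[j] gives back the subset sum version above):

  # 0/1 knapsack DP (sketch). w[1..n] are weights, v[1..n] are values, W is the capacity.
  def knapsack(w, v, W):
      n = len(w) - 1
      M = [[0] * (W + 1) for _ in range(n + 1)]
      for j in range(1, n + 1):
          for x in range(W + 1):
              if w[j] > x:
                  M[j][x] = M[j - 1][x]
              else:
                  M[j][x] = max(M[j - 1][x], v[j] + M[j - 1][x - w[j]])
      return M[n][W]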

Readability: 7/10
