Chapter 6: Dynamic Programming

This chapter focuses on dynamic programming, which is described as “living dangerously close to the edge of brute-force search” because it implicitly works over the exponentially large set of possible solutions, but without ever examining each one explicitly.

6.1: Weighted Interval Scheduling: A Recursive Procedure

We will first look back at the weighted interval scheduling problem. We assume the jobs are sorted by finish time, and then run a short recursive algorithm: for each job, we take the greater of two values, the value of the current job plus the optimal value over the jobs compatible with it, or the optimal value we already have for the previous job. This uses memoization, which improves our runtime greatly.
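The recurrence above can be sketched as follows. This is a minimal illustration, assuming jobs are given as (start, finish, value) tuples; the function names are my own, not the book's.

```python
# Sketch of memoized weighted interval scheduling.
# Assumes jobs are (start, finish, value) tuples (an illustrative format).
from functools import lru_cache
import bisect

def max_weight_schedule(jobs):
    # Sort jobs by finish time, as the recurrence requires.
    jobs = sorted(jobs, key=lambda j: j[1])
    starts = [s for s, _, _ in jobs]
    finishes = [f for _, f, _ in jobs]

    def p(j):
        # Number of jobs finishing no later than job j starts
        # (jobs are 1-indexed here; 0 means "no compatible job").
        return bisect.bisect_right(finishes, starts[j - 1])

    @lru_cache(maxsize=None)
    def opt(j):
        if j == 0:
            return 0
        # Either take job j (plus the best compatible schedule) or skip it.
        return max(jobs[j - 1][2] + opt(p(j)), opt(j - 1))

    return opt(len(jobs))
```

Without the `lru_cache` memoization, the same recursion can revisit the same subproblem exponentially many times; with it, each `opt(j)` is computed once.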

Memoizing and recursing over subproblems is a huge part of dynamic programming, as it allows us to avoid the exponential cost of a brute-force algorithm. Overall, this section was good review and interesting, so I give it a 7/10.

6.2: Principles of Dynamic Programming: Memoization or Iteration over Subproblems

This section went over what we used in the previous section, discussing in further detail what memoization is, why it helps us, and how it works. Three of the biggest properties needed for dynamic programming are: there are only a polynomial number of subproblems; the solution to the original problem is easily computed from the solutions of the subproblems; and there is a natural ordering on the subproblems (i.e. smallest to largest) that allows us to determine the solution of a subproblem from the solutions of smaller subproblems.
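These properties also make the iterative, bottom-up approach from this section possible: instead of recursing, we fill in a table of subproblem values from smallest to largest. A minimal sketch of the same weighted interval recurrence in that style, again assuming illustrative (start, finish, value) tuples:

```python
# Bottom-up (iterative) version of the weighted interval recurrence.
import bisect

def opt_iterative(jobs):
    jobs = sorted(jobs, key=lambda j: j[1])  # sort by finish time
    finishes = [f for _, f, _ in jobs]
    M = [0] * (len(jobs) + 1)  # M[j] = best value using the first j jobs
    for j in range(1, len(jobs) + 1):
        s, f, v = jobs[j - 1]
        pj = bisect.bisect_right(finishes, s)  # last job compatible with job j
        M[j] = max(v + M[pj], M[j - 1])        # take job j or skip it
    return M[len(jobs)]
```

The natural ordering of subproblems (smallest j to largest j) is what guarantees M[pj] and M[j - 1] are already filled in when we need them.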

This was really all this section covered. Short and to the point, you can never go wrong with that. 8/10.

6.3: Segmented Least Squares: Multi-way Choices

The least squares problem asks for the best-fit line through a set of data points. In the segmented version, we may use several lines to fit the points more closely, but each additional line incurs a penalty. To find the partition into line segments that balances goodness of fit against the cost of using more lines, we use dynamic programming.

We do this very similarly to the last section's problem, finding an optimal solution using memoization and comparing values, this time taking the minimum, since each choice is now associated with a penalty. Then we trace back through the computed values, fitting a line through each run of points that gives us the least total penalty.

The runtime of this will be O(n^3): there are O(n^2) pairs of points, and computing the error term for each pair takes O(n). Once all of the error values have been computed, the dynamic programming recurrence itself runs in O(n^2).
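A rough sketch of the recurrence OPT(j) = min over i of (e(i, j) + C + OPT(i - 1)), where e(i, j) is the squared error of the best single line through points i..j and C is the per-segment penalty. The function names and the naive per-pair error computation are my own illustration; the O(n^2)-pair precomputation of the error terms is folded into the loop for brevity.

```python
# Sketch of the segmented least squares dynamic program.
def segmented_least_squares(points, C):
    points = sorted(points)
    n = len(points)

    def seg_error(i, j):
        # Least-squares error of one line through points[i..j] (0-indexed).
        pts = points[i:j + 1]
        m = len(pts)
        sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
        denom = m * sxx - sx * sx
        if denom == 0:
            return 0.0  # degenerate case (single point): treat error as zero
        a = (m * sxy - sx * sy) / denom          # slope of best-fit line
        b = (sy - a * sx) / m                    # intercept of best-fit line
        return sum((y - a * x - b) ** 2 for x, y in pts)

    OPT = [0.0] * (n + 1)  # OPT[j] = minimum total cost for the first j points
    for j in range(1, n + 1):
        # Last segment covers points i..j; pay its error plus one penalty C.
        OPT[j] = min(seg_error(i - 1, j - 1) + C + OPT[i - 1]
                     for i in range(1, j + 1))
    return OPT[n]
```

For collinear points, one segment with zero error is optimal, so the total cost is just the single penalty C; a larger C pushes the optimum toward fewer segments.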

This section was a good overview of the problem and helped me to grasp the topic better, so 8/10.

courses/cs211/winter2018/journals/shermanc/chapter6.txt · Last modified: by shermanc
CC Attribution-Noncommercial-Share Alike 4.0 International