We now use the algorithm for the previous problem to summarize the basic principles of dynamic programming. In the previous section we developed a polynomial-time solution by memoizing our initial exponential-time recursion; it is helpful to formulate an essentially equivalent, iterative version of that algorithm.

Designing the Algorithm. The array M is the key to the efficient algorithm. Rather than using memoized recursion, we can compute the values in M directly by an iterative algorithm (Iterative-Compute-Opt, page 259 of the text).

Analyzing the Algorithm. We can prove by induction on j that this algorithm writes OPT(j) in array entry M[j]. The running time of Iterative-Compute-Opt is clearly O(n), since it runs for exactly n iterations and spends constant time in each.

A Basic Outline of Dynamic Programming. The memoized and iterative approaches have a great deal of conceptual overlap, since both grow out of the insight contained in the recurrence. We will use iterative building of subproblems, since it is often easier to express. To develop an algorithm based on dynamic programming, one needs a collection of subproblems derived from the original problem that satisfies a few basic properties:

  • There are only a polynomial number of subproblems.
  • The solution to the original problem can be computed easily from the solutions to the subproblems.
  • There is a natural ordering on the subproblems from “smallest” to “largest”, together with an easy-to-compute recurrence that allows one to determine the solution to a subproblem from the solutions to some number of smaller subproblems.

It is never clear that a collection of subproblems will be useful until one finds a recurrence linking them together.
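As a sketch of the two equivalent approaches for the Weighted Interval Scheduling problem of the previous section, here is a hypothetical rendering in Python. It assumes arrays padded so they are effectively 1-indexed, with values[j] the value of interval j and p[j] the largest index i < j whose interval is compatible with interval j (0 if none); the function names are my own, not the text's.

```python
# Sketch of Iterative-Compute-Opt: fill M bottom-up, from the
# smallest subproblem OPT(0) = 0 to the full problem OPT(n).
def iterative_compute_opt(values, p):
    n = len(values) - 1
    M = [0] * (n + 1)          # M[j] will hold OPT(j)
    for j in range(1, n + 1):  # each entry uses only smaller indices
        M[j] = max(values[j] + M[p[j]], M[j - 1])
    return M

# The memoized recursion fills the same array top-down; both grow
# out of the same recurrence, which is the conceptual overlap noted above.
def memoized_compute_opt(values, p):
    n = len(values) - 1
    M = [None] * (n + 1)
    M[0] = 0

    def m_compute_opt(j):
        if M[j] is None:  # compute each subproblem at most once
            M[j] = max(values[j] + m_compute_opt(p[j]),
                       m_compute_opt(j - 1))
        return M[j]

    m_compute_opt(n)
    return M
```

For example, with three intervals of values 2, 4, 4 where only intervals 1 and 3 are compatible (values = [0, 2, 4, 4], p = [0, 0, 0, 1]), both routines fill M with [0, 2, 4, 6]: the optimum of 6 takes intervals 1 and 3.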

This section was short and not too hard to comprehend, so I rate it 8.5/10.

courses/cs211/winter2014/journals/fred/6.2_principles_of_dynamic_programming_memoization_or_iteration_over_subproblems.txt · Last modified: 2014/03/25 11:54 by gisaf