6.2. Principles of Dynamic Programming: Memoization or Iteration over Subproblems


Designing the Algorithm


The key to the efficient solution found by memoization is the array M, which keeps track of the values of previously computed subproblems. Once we have the array M, the problem is essentially solved: M[n] contains the value of an optimal solution on the full instance, and Find-Solution can be used to trace back through M efficiently and return an optimal solution itself. The crucial observation of this section is that we can just as well fill in the entries of M directly by an iterative algorithm, with no recursion at all.
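For contrast, here is a minimal Python sketch (not the book's code) of what the memoized recursion from the previous section looks like; the value list values (with values[j-1] = v_j), the predecessor array p (with p[j] the last interval compatible with j), and the cache M (with M[0] = 0 and the remaining entries None) are all assumed to be set up in advance. The iterative alternative that this section develops follows below.

def m_compute_opt(j, values, p, M):
    # Return OPT(j), caching each value in M so it is computed only once.
    if j == 0:
        return 0
    if M[j] is None:
        # Either take interval j (v_j plus the best over intervals 1..p(j))
        # or skip it (the best over intervals 1..j-1).
        M[j] = max(values[j - 1] + m_compute_opt(p[j], values, p, M),
                   m_compute_opt(j - 1, values, p, M))
    return M[j]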
Algorithm

Iterative-Compute-OPT
  M[0] = 0
  For j = 1, 2, …, n
    M[j] = max(v_j + M[p(j)], M[j-1])
  Endfor
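
As a concrete illustration, here is a hedged Python sketch of the same iterative table-filling for weighted interval scheduling. It assumes the intervals are given as (start, finish, value) tuples sorted by finish time, and it computes p(j) by binary search over the finish times; the function name compute_opt and this input format are choices made here, not taken from the book.

from bisect import bisect_right

def compute_opt(intervals):
    # Fill the table M iteratively so that M[j] = OPT(j).
    n = len(intervals)
    finishes = [finish for (_, finish, _) in intervals]

    # p[j] (1-indexed) = number of intervals that finish no later than the
    # start of interval j, i.e. the last interval compatible with j.
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        p[j] = bisect_right(finishes, intervals[j - 1][0])

    # Iterative-Compute-OPT: M[0] = 0; M[j] = max(v_j + M[p(j)], M[j-1]).
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        v_j = intervals[j - 1][2]
        M[j] = max(v_j + M[p[j]], M[j - 1])
    return M, p

For example, compute_opt([(0, 3, 2), (1, 5, 4), (4, 6, 4)]) returns M = [0, 2, 4, 6], so the optimal value is 6, achieved by the first and third intervals.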


Analyzing the Algorithm


  • It is easily proved by induction on j that the algorithm above writes OPT(j) into array entry M[j] (see the previous section of the book for the proof). We can then pass the filled-in array M to Find-Solution (also shown in the previous section) to recover an optimal solution in addition to its value; a sketch of this trace-back follows this list.
  • The running time of Iterative-Compute-OPT is O(n), since it runs for n iterations and spends O(1) time on each.
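
The trace-back itself is short. Below is a hedged sketch of Find-Solution, assuming M, p, and the per-interval values come from a routine like the compute_opt sketch above; it returns 1-based interval indices.

def find_solution(j, M, p, values):
    # Return the indices of the intervals in an optimal solution for 1..j.
    if j == 0:
        return []
    # Interval j belongs to an optimal solution exactly when
    # v_j + OPT(p(j)) >= OPT(j-1); otherwise OPT(j-1) alone is at least as good.
    if values[j - 1] + M[p[j]] >= M[j - 1]:
        return find_solution(p[j], M, p, values) + [j]
    return find_solution(j - 1, M, p, values)

Each call strictly decreases j, so the trace-back also takes O(n) time once M has been filled in.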

Basic outline of Dynamic Programming

To be solvable by dynamic programming, a problem should have the following properties:

  • There is only a polynomial number of subproblems.
  • The solution to the original problem can be easily computed from the solutions to the subproblems.
  • There is a natural ordering of the subproblems from "smallest" to "largest", together with an easy-to-compute recurrence.

This section was both short and straightforward. I give it an 8/10.
