Table of Contents
Chapter 6
6.1: Weighted Interval Scheduling: A Recursive Procedure
- A more general version of the interval scheduling problem we solved earlier with a greedy algorithm.
- Dynamic programming solution: a recurrence equation that expresses the optimal solution (or its value) in terms of the optimal solutions to smaller sub-problems
- Memoization:
- Saving values that have already been computed to reduce run time.
- Analysis on page 257; a minimal sketch of the memoized recursion follows this list.
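A minimal Python sketch of the memoized recursion, assuming jobs are given as (start, finish, value) triples and that a job may begin exactly when another finishes (the function name and representation here are my own, not the book's):

```python
import bisect

def weighted_interval_scheduling(jobs):
    """Memoized recursion for weighted interval scheduling.
    jobs: list of (start, finish, value) triples."""
    jobs = sorted(jobs, key=lambda job: job[1])        # sort by finish time
    finishes = [f for _, f, _ in jobs]
    # p[j] = number of jobs that finish no later than job j+1 starts
    # (i.e., the index of the last job compatible with it)
    p = [bisect.bisect_right(finishes, s) for s, _, _ in jobs]

    memo = {0: 0}
    def opt(j):                                        # best value among jobs 1..j
        if j not in memo:
            _, _, v = jobs[j - 1]
            memo[j] = max(v + opt(p[j - 1]),           # include job j
                          opt(j - 1))                  # exclude job j
        return memo[j]

    return opt(len(jobs))
```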
6.2: Principles of Dynamic Programming: Memoization or Iteration over Subproblems
- Iterating over subproblems instead of computing solutions recursively
- Uses the array M built up by the memoization/recursion solution
- We can directly compute the entries in M with an iterative algorithm rather than memoized recursion (see the sketch after this list).
- Analysis on page 259.
- Second approach to dynamic programming: iterative building up of subproblems
- Subproblems for this approach must satisfy the following properties:
- There are only a polynomial number of subproblems
- The solution to the original problem can be easily computed from the solutions to the subproblems
- There is a natural ordering on the subproblems from “smallest” to “largest” together with an easy-to-compute recurrence that allows one to determine the solution to a subproblem from the solutions to some number of smaller subproblems.
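A sketch of the same computation done iteratively, filling M in order of increasing j (same assumed job representation as the previous example):

```python
import bisect

def weighted_interval_iterative(jobs):
    """Iterative dynamic programming: fill M[0..n] in order instead of recursing."""
    jobs = sorted(jobs, key=lambda job: job[1])        # sort by finish time
    finishes = [f for _, f, _ in jobs]
    p = [bisect.bisect_right(finishes, s) for s, _, _ in jobs]

    n = len(jobs)
    M = [0] * (n + 1)                                  # M[j] = optimal value using jobs 1..j
    for j in range(1, n + 1):
        value = jobs[j - 1][2]
        M[j] = max(value + M[p[j - 1]],                # include job j
                   M[j - 1])                           # exclude job j
    return M[n]
```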
6.3: Segmented Least Squares: Multi-way Choices
- Multi-way choices instead of binary choices
- Deals with fitting a sequence of line segments through an ordered set of points
- Penalty of a partition:
- The number of segments into which we partition P, times a fixed, given multiplier C > 0, plus, for each segment, the error of the optimal line through that segment's points
- Design and analysis of this segmented least squares problem can be found on pages 264-266; a sketch follows this list
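A sketch of the recurrence OPT(j) = min over i of (e(i, j) + C + OPT(i-1)), assuming points are sorted by x with distinct x-coordinates; for simplicity this recomputes each segment error from scratch rather than precomputing all the e(i, j) values up front as the book does:

```python
def segmented_least_squares(points, C):
    """DP for segmented least squares.
    points: list of (x, y) pairs sorted by x, distinct x-coordinates assumed.
    C: fixed cost charged per segment."""
    n = len(points)

    def seg_error(i, j):
        """SSE of the best-fit line through points[i..j] (0-indexed, inclusive)."""
        pts = points[i:j + 1]
        m = len(pts)
        sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
        denom = m * sxx - sx * sx               # zero only for a single point here
        a = (m * sxy - sx * sy) / denom if denom else 0.0
        b = (sy - a * sx) / m
        return sum((y - a * x - b) ** 2 for x, y in pts)

    OPT = [0.0] * (n + 1)                       # OPT[j] = min penalty for the first j points
    for j in range(1, n + 1):
        OPT[j] = min(seg_error(i - 1, j - 1) + C + OPT[i - 1]
                     for i in range(1, j + 1))  # last segment covers points i..j
    return OPT[n]
```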
6.4: Subset Sums and Knapsacks: Adding a Variable
- Given a set of items, each with a given weight w_i, and a bound W on the total weight we can carry
- Knapsack problem: find a subset of the items with maximum total value whose total weight does not exceed W.
- Creation and analysis of the optimal algorithm for the knapsack problem begins on page 269 through page 271
- The knapsack problem can be solved in O(nW) time, where n is the number of items and W is the weight bound; this is pseudo-polynomial, since the running time depends on the magnitude of W (a sketch follows this list)
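A sketch of the O(nW) table-filling algorithm, assuming integer weights (the representation is my own):

```python
def knapsack(items, W):
    """0/1 knapsack dynamic program.
    items: list of (weight, value) pairs with integer weights; W: integer capacity."""
    n = len(items)
    # M[i][w] = best value achievable using the first i items with capacity w
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wt, val = items[i - 1]
        for w in range(W + 1):
            M[i][w] = M[i - 1][w]                              # skip item i
            if wt <= w:
                M[i][w] = max(M[i][w],
                              val + M[i - 1][w - wt])          # take item i
    return M[n][W]
```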
Final Thoughts (End of Chap 5, Beg of 6)
This chapter is a little more easily understood than last week's chapter. All in all, the knapsack problem is very intuitive, and so is the idea of dynamic programming. Readability: 7/10
6.5: RNA Secondary Structure: Dynamic Programming over Intervals
- Adding a second variable to consider a subproblem for every contiguous interval in {1,2,…,n}
- RNA secondary structure prediction is a great example of this problem.
- Secondary structure occurs when RNA loops back and forms pairs with itself.
- Design of an algorithm to predict the secondary structure of RNA can be found on pages 275-278; a sketch follows this list
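A sketch of the interval recurrence, counting the maximum number of allowed base pairs under the book's conditions (Watson-Crick pairs only, no sharp turns, no crossing pairs):

```python
def rna_secondary_structure(b):
    """OPT[i][j] = max base pairs in a valid secondary structure on b[i..j].
    b: RNA string over {'A', 'C', 'G', 'U'}."""
    n = len(b)
    pairs = {('A', 'U'), ('U', 'A'), ('C', 'G'), ('G', 'C')}
    OPT = [[0] * n for _ in range(n)]
    # build up by increasing interval length; intervals shorter than 6
    # allow no pairs because of the no-sharp-turns condition
    for k in range(5, n):                     # k = j - i
        for i in range(n - k):
            j = i + k
            best = OPT[i][j - 1]              # base j is not paired
            for t in range(i, j - 4):         # base j pairs with t, t < j - 4
                if (b[t], b[j]) in pairs:
                    left = OPT[i][t - 1] if t > i else 0
                    best = max(best, 1 + left + OPT[t + 1][j - 1])
            OPT[i][j] = best
    return OPT[0][n - 1] if n else 0
```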
6.6: Sequence Alignment
- How do we define similarity between two words or strings?
- Strings also arise in biology, for example in chromosomes
- Whole field of computational biology that deals with this
- First, a parameter that defines a gap penalty: each gap in the alignment incurs this cost.
- Second, for each pair of letters p, q in the alphabet, there is a mismatch cost for lining up p with q.
- The cost of alignment M is the sum of its gap and mismatch costs.
- Design of this algorithm starts on page 281; a sketch follows this list
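A sketch of the alignment recurrence, assuming delta is the gap penalty and alpha is a lookup giving the mismatch cost for each pair of letters (with cost 0 assumed for identical letters); the names are my own:

```python
def alignment_cost(x, y, delta, alpha):
    """Minimum alignment cost between strings x and y.
    delta: per-gap penalty; alpha: dict mapping (p, q) to mismatch cost."""
    m, n = len(x), len(y)
    A = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        A[i][0] = i * delta                   # x[0..i) aligned entirely against gaps
    for j in range(1, n + 1):
        A[0][j] = j * delta
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            A[i][j] = min(alpha[(x[i - 1], y[j - 1])] + A[i - 1][j - 1],  # match x_i, y_j
                          delta + A[i - 1][j],                             # x_i takes a gap
                          delta + A[i][j - 1])                             # y_j takes a gap
    return A[m][n]
```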
6.7: Sequence Alignment in Linear Space via Divide and Conquer
- Must get around the O(mn) space requirement of the basic alignment algorithm
- This section shows how to keep the O(mn) running time while using only O(m + n) space
- Pages 285-290 cover the design and analysis of this algorithm; a sketch of the space-saving idea follows this list
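The space-saving half of the idea is that the optimal value needs only two rows of the table; recovering the alignment itself is what requires the book's divide-and-conquer step. A minimal sketch, reusing the conventions from the previous example:

```python
def alignment_cost_linear_space(x, y, delta, alpha):
    """Computes the optimal alignment *value* in O(n) space by keeping
    only the previous and current rows of the DP table. Reconstructing
    the alignment itself needs the divide-and-conquer step from the book."""
    m, n = len(x), len(y)
    prev = [j * delta for j in range(n + 1)]
    for i in range(1, m + 1):
        cur = [i * delta] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(alpha[(x[i - 1], y[j - 1])] + prev[j - 1],
                         delta + prev[j],
                         delta + cur[j - 1])
        prev = cur
    return prev[n]
```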
6.8: Shortest Paths in a Graph
- This is the section I understand the most.
- Deals with finding the shortest path in a graph with negative edges.
- Minimum-Cost Path Problem and the Shortest-Path Problem
- Negative cycles can be seen as good arbitrage opportunities
- Dijkstra's Algorithm does not work once edges can have negative costs, so a dynamic programming solution, the Bellman-Ford Algorithm, is used instead.
- The design, analysis, and implementation of this algorithm start on page 291; a sketch follows this list
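A sketch of the Bellman-Ford algorithm, written here in the standard forward form from a source s (the book's version works backward from the destination t, but the idea is the same):

```python
def bellman_ford(n, edges, s):
    """Shortest paths from s in a graph that may have negative edge costs
    (but no negative cycles reachable from s).
    n: number of nodes, labeled 0..n-1; edges: list of (u, v, cost)."""
    INF = float('inf')
    dist = [INF] * n
    dist[s] = 0
    for _ in range(n - 1):                    # at most n-1 rounds of updates
        changed = False
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
                changed = True
        if not changed:                       # stop early once nothing improves
            break
    return dist
```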
6.5-6.8 Final Words
This is a section that I did not understand particularly well, either in the book or in class. The shortest path portion was probably my strongest area, but I'm struggling a little bit with this material. Glad I did this journal on Monday so I know to get some extra help on this material before the problem set is due on Friday. Readability: 5/10
6.9: Shortest Paths and Distance Vector Protocols
- Shortest Paths algorithm can be applied to routers in a communication network to determine the most efficient path.
- Nodes are routers and edges are direct paths between these routers.
- Find minimum delay from a source node s to a destination node t.
- Cannot use Dijkstra's because it requires global knowledge.
- Bellman-Ford gives us the best option
- Use a “push-based” algorithm rather than the “pull-based” algorithm of the original Bellman-Ford
- This push-based method can be seen on page 298, and an asynchronous version can be found on page 299; a rough sketch follows the list below
- Problems:
- Assumes edge costs remain constant while the algorithm runs
- Can cause the “counting to infinity” problem when a link fails or an edge cost increases
- For this reason path vector protocols are better than distance vector protocols
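A rough sketch of the push-based idea: when a node's distance estimate to the destination improves, it notifies the nodes with links pointing at it, which then recompute their own estimates. This assumes constant edge costs and no negative cycles; the names and representation are my own:

```python
from collections import defaultdict, deque

def push_based_bellman_ford(nodes, edges, t):
    """Push-based distance vector computation toward destination t.
    nodes: iterable of node labels; edges: list of (u, v, cost) directed links."""
    INF = float('inf')
    into = defaultdict(list)                  # into[w] = links (u, cost) pointing at w
    for u, v, cost in edges:
        into[v].append((u, cost))
    dist = {v: INF for v in nodes}
    dist[t] = 0
    active = deque([t])                       # nodes whose estimate just changed
    while active:
        w = active.popleft()
        for u, cost in into[w]:               # u could route to t through w
            if dist[w] + cost < dist[u]:
                dist[u] = dist[w] + cost
                active.append(u)              # push the improved estimate onward
    return dist
```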