=====Summary of 6.1-6.4=====

====6.1====

Sometimes there is no greedy solution to a given problem. Dynamic programming can often look quite similar to brute-force search: it systematically works through the exponentially large set of possible solutions, BUT it does so without ever examining them all explicitly. We will build a dynamic-programming solution to the Weighted Interval Scheduling problem.

Our first solution is, unfortunately, exponential:

  Compute-Opt(j):
      if j = 0
          Return 0
      else
          Return max(Vj + Compute-Opt(p(j)), Compute-Opt(j-1))
      endif

It is exponential only because we re-compute the same values multiple times; if we memoize the values (store them in an array as we compute them), then we can bring the running time down to O(n).
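
To make the memoization concrete, here is a minimal Python sketch of the memoized recursion (my own illustration, not the book's code). The example jobs, the bisect-based p(j) helper, and all names are assumptions; the intervals are assumed to be sorted by finish time.

  from functools import lru_cache
  import bisect
  
  # Hypothetical example data: (start, finish, value), sorted by finish time.
  jobs = [(0, 3, 2), (1, 5, 4), (4, 6, 4), (3, 9, 7), (6, 10, 2)]
  finishes = [f for _, f, _ in jobs]
  
  def p(j):
      """Largest index i < j compatible with job j (1-indexed), 0 if none."""
      # Rightmost job whose finish time is <= the start of job j.
      return bisect.bisect_right(finishes, jobs[j - 1][0])
  
  @lru_cache(maxsize=None)
  def compute_opt(j):
      if j == 0:
          return 0
      include = jobs[j - 1][2] + compute_opt(p(j))  # take job j
      exclude = compute_opt(j - 1)                  # skip job j
      return max(include, exclude)
  
  print(compute_opt(len(jobs)))  # optimal total value: 9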
  
  
====6.2====
  
Now we want to turn our recursive solution into an iterative one. It is important to note that we can compute the entries of M directly with an iterative algorithm, rather than using memoized recursion. We simply start with M[0] = 0 and keep incrementing j; each value M[j] we need is given by the same recurrence as before. It looks like this:

  Iterative-Compute-Opt:
      M[0] = 0
      For j = 1...n
          M[j] = max(Vj + M[p(j)], M[j-1])
      endfor

It is also an O(n) solution.
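
As a runnable counterpart (again my own sketch with made-up data, not the book's code), here is the iterative table-filling together with the O(n) traceback that recovers the optimal set itself:

  import bisect
  
  # Hypothetical input: (start, finish, value) triples sorted by finish time.
  jobs = [(0, 3, 2), (1, 5, 4), (4, 6, 4), (3, 9, 7), (6, 10, 2)]
  n = len(jobs)
  finishes = [f for _, f, _ in jobs]
  
  # p[j] = largest i < j compatible with job j (1-indexed), 0 if none.
  p = [0] + [bisect.bisect_right(finishes, jobs[j - 1][0]) for j in range(1, n + 1)]
  
  # Fill the table iteratively, exactly as in Iterative-Compute-Opt.
  M = [0] * (n + 1)
  for j in range(1, n + 1):
      M[j] = max(jobs[j - 1][2] + M[p[j]], M[j - 1])
  
  def find_solution(j):
      """Trace back through M to recover which jobs an optimal solution uses."""
      chosen = []
      while j > 0:
          if jobs[j - 1][2] + M[p[j]] >= M[j - 1]:  # job j is in some optimum
              chosen.append(j)
              j = p[j]
          else:
              j -= 1
      return chosen[::-1]
  
  print(M[n], find_solution(n))  # 9 [1, 4]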

We can look at our solution to the Weighted Interval Scheduling problem in a more general way to come up with a rough template for designing dynamic-programming algorithms. To develop an algorithm based on dynamic programming, one needs a collection of subproblems derived from the original problem that satisfies a few basic properties:
 i. There are only a polynomial number of subproblems.
 ii. The solution to the original problem can be easily computed from the solutions to the subproblems.
 iii. There is a natural ordering on the subproblems from "smallest" to "largest," together with an easy-to-compute recurrence that allows one to determine the solution to a subproblem from the solutions to some number of smaller subproblems.
  
====6.3====
  
The Weighted Interval Scheduling problem turns on a binary choice: a given interval is either included in the final solution or it is not. There are many more complicated problems we can apply dynamic programming to, where the recurrence involves "multi-way choices": at each step, we have a polynomial number of possibilities to consider.

Consider trying to find a line of best fit for a given set of data points. If the points lie cleanly along one line, we can easily compute the line of best fit. But data is not always so neat. What if the points would lie best against two or even three lines? We then want a solution that fits the points well using as few lines as possible; we don't want to just connect each point with its own individual line. This is the Segmented Least Squares problem.

The best running time for the algorithm, as seen in the book, is O(n^2).
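
Here is a rough Python sketch of this dynamic program (my own illustration; the data and names are made up). Following the book's formulation, each segment used incurs a fixed penalty C, and OPT(j) = min over i of e(i,j) + C + OPT(i-1), where e(i,j) is the squared error of the best single line through points i through j:

  # Hypothetical data: points sorted by x-coordinate, and a made-up penalty C.
  points = [(1, 1), (2, 2), (3, 3), (4, 10), (5, 12), (6, 14)]
  C = 2.0
  n = len(points)
  
  def segment_error(i, j):
      """Squared error of the least-squares line through points i..j (1-indexed)."""
      pts = points[i - 1:j]
      m = len(pts)
      if m <= 2:
          return 0.0  # one or two points are fit exactly
      sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
      sxy = sum(x * y for x, y in pts); sxx = sum(x * x for x, _ in pts)
      denom = m * sxx - sx * sx
      a = (m * sxy - sx * sy) / denom if denom else 0.0
      b = (sy - a * sx) / m
      return sum((y - a * x - b) ** 2 for x, y in pts)
  
  # e[i][j] = error of fitting one segment to points i..j.
  e = [[0.0] * (n + 1) for _ in range(n + 1)]
  for i in range(1, n + 1):
      for j in range(i, n + 1):
          e[i][j] = segment_error(i, j)
  
  # The multi-way recurrence: the last segment can start at any i <= j.
  OPT = [0.0] * (n + 1)
  for j in range(1, n + 1):
      OPT[j] = min(e[i][j] + C + OPT[i - 1] for i in range(1, j + 1))
  
  print(OPT[n])  # minimum total penalty

Note that recomputing each e(i,j) from scratch makes this sketch O(n^3); the book's O(n^2) bound comes from building the e(i,j) values with running sums.
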
====6.4====
  
For problems like the Knapsack Problem, we need to add an extra variable to the recurrence underlying the dynamic program. The goal in these more general problems is to choose a subset of maximum total value, subject to the restriction that its total weight not exceed W. No greedy algorithm is known that always finds the optimal solution to this type of problem, so we must turn to dynamic programming. For this problem, determining a good set of subproblems is tricky.

In order to find the value of Opt(n), we need to know not only the value of Opt(n-1), but also the best solution we can get using a subset of the first n-1 items and total allowed weight W - wn.

The running time of the resulting algorithm, as seen in the book, is O(nW). Given a table M of the optimal values of the subproblems, the optimal set S can be found in O(n) time.
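
A minimal Python sketch of this two-variable table and the O(n) traceback (my own illustration; the items and capacity are made up):

  # Hypothetical items as (weight, value) pairs, and a capacity W.
  items = [(2, 3), (3, 4), (4, 5), (5, 8)]
  W = 9
  n = len(items)
  
  # M[i][w] = max value achievable using the first i items with weight limit w.
  M = [[0] * (W + 1) for _ in range(n + 1)]
  for i in range(1, n + 1):
      wi, vi = items[i - 1]
      for w in range(W + 1):
          if wi > w:
              M[i][w] = M[i - 1][w]                  # item i cannot fit
          else:
              M[i][w] = max(M[i - 1][w],             # skip item i
                            vi + M[i - 1][w - wi])   # take item i
  
  # Trace back through the table in O(n) to recover the optimal set S.
  S, w = [], W
  for i in range(n, 0, -1):
      if M[i][w] != M[i - 1][w]:  # item i must have been taken
          S.append(i)
          w -= items[i - 1][0]
  
  print(M[n][W], sorted(S))  # 13 [3, 4]
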
====Readability====
Dynamic programming is pretty confusing! I had a hard time understanding these sections, especially the final one. The first two were pretty clear. 5/10 overall, though.
  
=====Summary of 7.1-7.2, 7.5, 7.7=====

====7.1====

Graphs are often used to model transportation networks, in which edges carry some sort of traffic and nodes act as "switches" passing traffic between different edges. Network models like this have capacities on the edges, source nodes that generate traffic, and sink nodes that act as destinations and absorb traffic as it arrives. The traffic across these kinds of graphs is often referred to as "flow."

A typical goal with this type of graph is to arrange the traffic so as to make as efficient use as possible of the available capacity. At the moment, there is no known algorithm for the Maximum-Flow Problem that could be considered a "dynamic programming" solution, so we return to greedy solutions.

The Ford-Fulkerson Algorithm solves this problem. (page 344)

It can be implemented in O(mC) time, so long as all the capacities in the flow network are integers (here C is the total capacity of the edges leaving the source).
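
As an illustration (my own sketch, not the book's code), here is a compact Python version of Ford-Fulkerson that repeatedly finds augmenting paths in the residual graph by depth-first search:

  # The graph is a dict-of-dicts of capacities; augmenting along a path
  # updates forward and reverse residual capacities.
  def max_flow(capacity, s, t):
      residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
      for u in list(residual):
          for v in residual[u]:
              residual.setdefault(v, {}).setdefault(u, 0)  # reverse edges
  
      def find_path(u, visited, bottleneck):
          # DFS for an s-t path with positive residual capacity.
          if u == t:
              return bottleneck, []
          visited.add(u)
          for v, cap in residual[u].items():
              if cap > 0 and v not in visited:
                  found = find_path(v, visited, min(bottleneck, cap))
                  if found:
                      b, path = found
                      return b, [(u, v)] + path
          return None
  
      flow = 0
      while True:
          found = find_path(s, set(), float("inf"))
          if not found:
              return flow  # no augmenting path left: the flow is maximum
          b, path = found
          for u, v in path:
              residual[u][v] -= b  # push b units forward
              residual[v][u] += b  # allow b units to be undone later
          flow += b
  
  # Hypothetical example network with integer capacities.
  G = {"s": {"a": 10, "b": 10}, "a": {"b": 2, "t": 8}, "b": {"t": 10}, "t": {}}
  print(max_flow(G, "s", "t"))  # 18
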
====7.2====

We continue analyzing the Ford-Fulkerson algorithm and verify that the flow it returns does indeed have the maximum possible value for any graph G.

The Ford-Fulkerson algorithm terminates when the flow f admits no s-t path in the residual graph Gf. This turns out to be the only property needed for proving its maximality. (7.9, pg 348)

Remember when we pointed out that the capacities in our graph must be integers for the F-F algorithm? Well, if that's NOT the case and real-valued capacities are allowed, it is entirely possible for the algorithm to run forever.
====7.5====

The Bipartite Matching Problem

A matching in a graph G is a subset M of the edges E such that each node appears in at most one edge of M. The Bipartite Matching Problem is that of finding a matching in G of the largest possible size.

The Ford-Fulkerson algorithm can be used to find a maximum matching in a bipartite graph in O(mn) time.
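
As a small illustration of the augmenting-path idea behind this (my own sketch; it specializes Ford-Fulkerson to the unit-capacity matching network rather than building that network explicitly):

  # adj[u] lists the right-side nodes adjacent to left-side node u.
  def max_matching(adj, right_nodes):
      match = {v: None for v in right_nodes}  # right node -> matched left node
  
      def try_augment(u, visited):
          # Look for an augmenting path starting at left node u.
          for v in adj[u]:
              if v not in visited:
                  visited.add(v)
                  # v is free, or its current partner can be re-matched elsewhere.
                  if match[v] is None or try_augment(match[v], visited):
                      match[v] = u
                      return True
          return False
  
      size = 0
      for u in adj:  # one O(m) augmentation attempt per left node: O(mn) total
          if try_augment(u, set()):
              size += 1
      return size, match
  
  # Hypothetical bipartite graph.
  adj = {"u1": ["v1", "v2"], "u2": ["v1"], "u3": ["v2", "v3"]}
  print(max_matching(adj, ["v1", "v2", "v3"]))  # (3, {'v1': 'u2', 'v2': 'u1', 'v3': 'u3'})
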
====7.7====
Much of the power of the Maximum-Flow Problem lies in the fact that many problems with a nontrivial combinatorial search component can be solved in polynomial time, because they can be reduced to the problem of finding a maximum flow or a minimum cut in a directed graph.

Before, we had just one source and one sink. Now we will have several of each! Instead of maximizing the flow value, we have sources with fixed supplies and sinks with fixed demands, and our goal is to ship flow from the nodes with available supply to those with given demands.
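
The standard reduction attaches a "super-source" and "super-sink" to the network; here is a sketch of that construction in Python (my own illustration with made-up demands; the d[v] < 0 convention for supplies follows the book):

  # d[v] < 0 means v is a source with supply -d[v]; d[v] > 0 means v demands d[v].
  def add_super_terminals(capacity, d):
      G = {u: dict(nbrs) for u, nbrs in capacity.items()}
      G["s*"], total_demand = {}, 0
      for v, dv in d.items():
          if dv < 0:                        # supply node: edge s* -> v
              G["s*"][v] = -dv
          elif dv > 0:                      # demand node: edge v -> t*
              G.setdefault(v, {})["t*"] = dv
              total_demand += dv
      G.setdefault("t*", {})
      return G, total_demand
  
  # A flow meeting all demands exists iff the maximum s*-t* flow saturates
  # every new edge, i.e. has value total_demand (checkable with any max-flow
  # routine, such as the Ford-Fulkerson sketch above).
  d = {"a": -3, "b": -3, "c": 2, "t": 4}
  cap = {"a": {"b": 3, "c": 3}, "b": {"c": 1, "t": 2}, "c": {"t": 2}}
  G, D = add_super_terminals(cap, d)
  print(D, G)
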
====Readability====
4/10. This section is incredibly dense and hard to make sense of. I don't think the book does a great job of explaining its more technical aspects.