  * Questions: The proof of this algorithm is a bit fuzzy to me. I would like to discuss it in class.
  * Rating: 7/10 this is a really cool algorithm.
====== Chapter 6.1 Summary ======
  * This section deals with weighted interval scheduling. The input is a set of jobs, each with a start time, an end time, and a value (or weight), and the goal is to accept the set of non-overlapping jobs with the greatest total value. No natural greedy rule works here, so instead we design an algorithm around a simple observation: each job j is either in the optimal solution or not. If we do not select job j, the optimal value is the optimal value over jobs 1 through j-1. If we do select it, the optimal value is the value of j plus the optimal value over the jobs that do not overlap j. To decide, we recursively take the maximum of these two options. The efficiency of this algorithm relies on memoizing the recursion: after the first time we compute the optimal value for a given job, we store it in a globally accessible array, so it never has to be recomputed while computing optimal values for other jobs. With memoization the compute-opt procedure runs in O(n) time (a sketch follows).
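  * A minimal sketch of the memoized recursion, under assumptions not spelled out above: jobs are 1-indexed and sorted by finish time, v[j] is job j's value, and p[j] is the last job that ends before job j starts (0 if none).
<code python>
# Memoized weighted interval scheduling (a sketch; v and p are assumed inputs).
# memo caches each subproblem's value so it is computed at most once.

def compute_opt(j, v, p, memo):
    if j == 0:
        return 0
    if j not in memo:
        # Either skip job j, or take it plus the best compatible prefix.
        memo[j] = max(compute_opt(j - 1, v, p, memo),
                      v[j] + compute_opt(p[j], v, p, memo))
    return memo[j]
</code>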
+ | |||
+ | |||
====== Chapter 6.2 Summary ======
  * Instead of defining our solution to the weighted interval problem recursively, we can compute the same values iteratively, filling in an array M[0..n] in increasing order of j (see the sketch after this list). This iterative template is the heart of dynamic programming, which applies when a problem has:
    * a polynomial number of subproblems,
    * solutions to subproblems that can be easily combined to solve the overall problem, and
    * subproblems that can be naturally ordered from "smallest" to "largest".
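  * A minimal sketch of the iterative version, reusing the assumed v and p arrays from the 6.1 sketch above. Both versions do O(n) work once p is known; the iterative form just makes the evaluation order explicit and avoids recursion overhead.
<code python>
# Bottom-up weighted interval scheduling. M[j] is the optimal value over
# jobs 1..j; each entry depends only on "smaller" entries already filled in.

def iterative_opt(n, v, p):
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = max(M[j - 1], v[j] + M[p[j]])
    return M[n]
</code>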
  * Question: Why is it better to use the iterative approach than the recursive approach? Is it just for the sake of writing the algorithm, or is there some type of running time tradeoff I am missing?
  * Rating: 5/10 because it didn't give much detail on the reason for our new interest in iterative algorithms instead of recursive ones.
+ | |||
====== Chapter 6.3 Summary ======
  * The next section focuses on problems where we do not just pick between choosing something or not, but instead have to choose among some polynomial number of possibilities. These are called "multi-way choices."
  * The running example is segmented least squares: given a set of n points in the (x,y) plane, fit lines to them. This problem is unique because instead of forcing one line through the entire set of points, the points are often better fit by two or more line segments, and our algorithm accommodates this. We consider partitions of the points into contiguous segments, where each segment gets its own best-fit line, and for each partition P we compute a penalty: the number of segments in P (multiplied by a fixed per-segment cost C) plus, for each segment, the error of the optimal line through that segment. The penalty lets us use dynamic programming to compute the optimal set of lines to fit our points (a sketch follows).
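  * A sketch of the resulting recurrence, assuming a hypothetical helper err(i, j) that returns the error of the best-fit line through points i..j.
<code python>
# Segmented least squares DP (a sketch). OPT[j] is the minimum penalty for
# points 1..j; C is the fixed cost charged per segment. For each j we try
# every possible last segment (i, ..., j).

def segmented_least_squares(n, err, C):
    OPT = [0.0] * (n + 1)
    for j in range(1, n + 1):
        OPT[j] = min(err(i, j) + C + OPT[i - 1] for i in range(1, j + 1))
    return OPT[n]
</code>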
+ | |||
====== Chapter 6.4 Summary ======
  * The knapsack problem!
  * Given a set of objects, each with a weight and a value, and a knapsack with a maximum weight capacity W, how can we maximize the value of the items stored in the knapsack without putting too much weight in it?
  * To solve for the optimum over the first n items, it is not enough to know the optimum over the first n-1 items: once we commit to taking item n, we need the optimum over the first n-1 items with the remaining capacity W - w<sub>n</sub>. So the subproblems are indexed by two parameters, and we use a 2D array storing OPT(i, w), the best value achievable using the first i items with capacity w. This is a version of the memoization we used in previous algorithms: it lets us avoid recalculating and makes comparisons throughout the algorithm easy (a sketch follows).
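  * A minimal sketch of the table fill, assuming integer weights w[i] and values v[i], both 1-indexed.
<code python>
# 0/1 knapsack DP (a sketch). OPT[i][x] is the best value using items 1..i
# with capacity x; the answer is OPT[n][W]. Runs in O(nW) time.

def knapsack(n, W, w, v):
    OPT = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for x in range(W + 1):
            if w[i] > x:
                OPT[i][x] = OPT[i - 1][x]                     # item i cannot fit
            else:
                OPT[i][x] = max(OPT[i - 1][x],                # skip item i
                                v[i] + OPT[i - 1][x - w[i]])  # or take it
    return OPT[n][W]
</code>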
+ | |||
+ | |||
====== Chapter 7.1 Summary ======
  * This section deals with flow networks: directed graphs that model traffic through "switches," where each edge e has a capacity c<sub>e</sub>, a source node s generates traffic, and a sink node t absorbs it. A flow must respect the capacities and be conserved at every internal node, and the maximum-flow problem asks for the flow of greatest possible value from s to t.
  * The residual graph is the tool we will use to implement an algorithm for the maximum-flow problem. The residual graph contains the same nodes as our original graph, but its edges are different: for each edge e = (u,v) with leftover capacity, it has a forward edge (u,v) with residual capacity c<sub>e</sub> - f(e), and for each edge e = (u,v) carrying positive flow, it has a backward edge (v,u) with capacity f(e), representing flow we could undo.
  * The augment function defined on page 342 takes an s-t path P in the residual graph and pushes bottleneck(P, f) units of flow along it, where bottleneck(P, f) is the minimum residual capacity on P: it increases the flow on each forward edge of P by that amount and decreases the flow on each backward edge of P by the same amount.
  * The Ford-Fulkerson Algorithm then uses this augment function repeatedly: it finds an s-t path in the current residual graph, augments along it, rebuilds the residual graph, and stops when no s-t path remains, at which point the flow is maximum (a sketch follows).
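  * A compact sketch of Ford-Fulkerson, under representational assumptions not in the book's version: integer capacities in a dict-of-dicts cap, with backward residual edges represented by negative flow entries.
<code python>
from collections import defaultdict

# Ford-Fulkerson (a sketch). Pushing g units on (u, v) also sets
# flow[v][u] -= g, giving the backward edge (v, u) residual capacity g.

def ford_fulkerson(cap, s, t):
    flow = defaultdict(lambda: defaultdict(int))

    def augment(u, pushed, visited):
        # DFS for an s-t path of positive residual capacity; returns the
        # bottleneck amount actually pushed (0 if no path was found).
        if u == t:
            return pushed
        visited.add(u)
        for v in set(cap.get(u, {})) | set(flow[u]):
            residual = cap.get(u, {}).get(v, 0) - flow[u][v]
            if v not in visited and residual > 0:
                g = augment(v, min(pushed, residual), visited)
                if g > 0:
                    flow[u][v] += g   # use forward residual capacity
                    flow[v][u] -= g   # open backward residual capacity
                    return g
        return 0

    value = 0
    while True:
        g = augment(s, float("inf"), set())
        if g == 0:
            return value              # no augmenting path left
        value += g
</code>
  * For example, on a small made-up network, ford_fulkerson({"s": {"a": 10, "b": 5}, "a": {"t": 10}, "b": {"a": 15, "t": 10}}, "s", "t") returns 15.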
+ | |||
====== Chapter 7.2 Summary ======
  * We will now use the fact that cuts provide very natural upper bounds on the values of flows. The capacity of a cut (A,B), written c(A,B), is the sum of the capacities of all edges out of A. We then prove the flow value lemma: for any flow f and any s-t cut (A,B), v(f) = f<sup>out</sup>(A) - f<sup>in</sup>(A), which immediately gives v(f) ≤ c(A,B) (the short chain below spells this out).
  * This leads us to the conclusion that the value of the maximum flow equals the capacity of the minimum cut: the Max-Flow Min-Cut Theorem.
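  * The chain of inequalities behind that bound, written out (standard notation: f<sup>out</sup>(A) is the total flow leaving A, f<sup>in</sup>(A) the total flow entering A):
<code latex>
v(f) = f^{out}(A) - f^{in}(A)
     \le f^{out}(A)
     = \sum_{e \text{ out of } A} f(e)
     \le \sum_{e \text{ out of } A} c_e
     = c(A,B)
</code>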