courses:cs211:winter2014:journals:cory:home (walkerc, last modified 2014/03/31)
===== 5.2 Summary =====
A more general class of algorithms is obtained by considering divide-and-conquer algorithms that create recursive calls on q subproblems of size n/2 each and then combine the results in O(n) time. (1) For some constant c, T(n) ≤ qT(n/2) + cn when n > 2, and T(2) ≤ c. Any function T(·) satisfying (1) with q > 2 is bounded by O(n^(log_2 q)). So the running time is more than linear, since log_2 q > 1, but still polynomial in n. Any function T(·) satisfying (1) with q = 1 is bounded by O(n). Furthermore, any function satisfying (1) with q = 2 is bounded by O(n log n).
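
The three cases of the recurrence can be checked numerically. A minimal sketch (the function name and the choice c = 1 are my own, not from the text): unroll T(n) = qT(n/2) + cn for powers of 2 and compare against the claimed bounds.

```python
import math

def T(n, q, c=1):
    """Unroll T(n) <= q*T(n/2) + c*n with T(2) <= c, taken at equality,
    for n a power of 2."""
    if n <= 2:
        return c
    return q * T(n // 2, q, c) + c * n

# For q = 3 the claim is T(n) = O(n^(log_2 3)); the ratio below should
# level off at a constant as n grows.
for n in [2**k for k in range(4, 12)]:
    print(n, round(T(n, 3) / n**math.log2(3), 3))
```

For q = 1 the same loop with `T(n, 1) / n` levels off (linear growth), and for q = 2, `T(n, 2) / (n * math.log2(n))` does (the O(n log n) case).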

===== 5.3 Summary =====
Another problem that can be solved with a divide-and-conquer algorithm is counting inversions, which arises in the analysis of rankings. For example, many sites on the Web make use of a technique known as collaborative filtering, where they try to match people's preferences with those of other users. The number of inversions between two rankings is a natural measure of how different they are, and a Mergesort-like "merge-and-count" procedure computes it in O(n log n) time.
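
The merge-and-count idea can be sketched as follows (function names are my own): while merging two sorted halves, every time an element of the right half is taken before remaining elements of the left half, each of those left-half elements forms one inversion with it.

```python
def sort_and_count(a):
    """Return (sorted copy of a, number of inversions in a) in O(n log n)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, inv_left = sort_and_count(a[:mid])
    right, inv_right = sort_and_count(a[mid:])
    merged, inv_split = [], 0
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            # left[i:], all larger than right[j], each form an inversion with it
            inv_split += len(left) - i
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, inv_left + inv_right + inv_split
```

For example, `[2, 4, 1, 3, 5]` has the three inversions (2,1), (4,1), (4,3).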

===== 5.4 Summary =====
Finding the closest pair of points means that, given n points in the plane, we want to find the pair that is closest together. This problem comes from computational geometry, and is useful in graphics, computer vision, geographic information systems, molecular modeling, etc. Given points P = {p_1, ..., p_n}, our goal is to find a pair of points p_i, p_j that minimizes the distance d(p_i, p_j). This is solved with another divide-and-conquer algorithm, like Mergesort: find the closest pair among the points in the "left half" of P and the closest pair among the points in the "right half" of P, then use this information to get the overall solution in linear time. Overall, this gives O(n log n) running time, and correctly outputs a closest pair of points in P. Question: How exactly DO we recombine at the end in linear time? Interest: 3/10
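
On the question of the combine step: after the recursive calls return a distance d, only points within d of the dividing line can form a closer pair, and in y-sorted order each such point needs comparing to at most 7 successors, so the combine is linear. A sketch (my own code, assuming at least two distinct points):

```python
import math

def closest_pair(points):
    """O(n log n) closest pair of distinct points; returns (dist, p, q)."""
    px = sorted(points)                       # sorted by x
    py = sorted(points, key=lambda p: p[1])   # sorted by y

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def rec(px, py):
        n = len(px)
        if n <= 3:  # base case: brute force
            return min((dist(p, q), p, q)
                       for i, p in enumerate(px) for q in px[i + 1:])
        mid = n // 2
        x_mid = px[mid - 1][0]
        qx, rx = px[:mid], px[mid:]
        left_set = set(qx)
        qy = [p for p in py if p in left_set]
        ry = [p for p in py if p not in left_set]
        best = min(rec(qx, qy), rec(rx, ry))
        d = best[0]
        # Combine in linear time: strip of width 2d around the dividing
        # line, each point checked against at most 7 y-order successors.
        strip = [p for p in py if abs(p[0] - x_mid) < d]
        for i, p in enumerate(strip):
            for q in strip[i + 1:i + 8]:
                best = min(best, (dist(p, q), p, q))
        return best

    return rec(px, py)
```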

===== 5.5 Summary =====
Integer multiplication can be done with a different divide-and-conquer algorithm, in which the "Divide" step breaks each n-digit number into its high-order and low-order halves. The obvious recursion produces four subproblems of size n/2, which yields no improvement over the quadratic grade-school algorithm; the trick is to combine the terms so that only three recursive multiplications are needed, giving a running time of O(n^(log_2 3)) = O(n^1.59).
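
The three-multiplication trick (Karatsuba's algorithm) can be sketched like this, assuming nonnegative integers; the cross term xh·yl + xl·yh is recovered from a single product of sums.

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with three half-size recursive products."""
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    B = 10 ** m
    xh, xl = divmod(x, B)   # split into high- and low-order halves
    yh, yl = divmod(y, B)
    high = karatsuba(xh, yh)
    low = karatsuba(xl, yl)
    # (xh + xl)(yh + yl) - high - low = xh*yl + xl*yh: one product, not two
    mid = karatsuba(xh + xl, yh + yl) - high - low
    return high * B**2 + mid * B + low
```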

===== 6.1 Summary =====
The Interval Scheduling Problem can be solved optimally with a particular greedy algorithm, but the Weighted Interval Scheduling Problem requires a more general version, where each interval has a value or weight, and we want to accept a set of maximum value. With weighted intervals, no natural greedy algorithm is known; instead, we switch to dynamic programming. In particular, a recursive algorithm is used to solve this problem, where request j belongs to an optimal solution on the first j requests if and only if v_j + OPT(p(j)) ≥ OPT(j-1), where OPT(x) is the value of the optimal solution on the first x requests. In the process of using this recursion to calculate the optimal solution, we must save values that have already been computed, which is called memoization. The running time of the complete algorithm with memoization is O(n). Question: How does this algorithm avoid delving through all possibilities?
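
A sketch of the memoized recursion (my own code; it assumes intervals are given as (start, finish, value) and that an interval may start exactly when another finishes). Memoization is what avoids exploring all 2^n subsets: each OPT(j) is computed once, so only n + 1 distinct subproblems are ever solved.

```python
from bisect import bisect_right

def weighted_interval_schedule(intervals):
    """intervals: list of (start, finish, value). Returns the optimal value."""
    jobs = sorted(intervals, key=lambda t: t[1])   # sort by finish time
    finishes = [f for _, f, _ in jobs]
    # p[j]: number of jobs (in sorted order) compatible with and before job j+1
    p = [bisect_right(finishes, jobs[j][0]) for j in range(len(jobs))]
    memo = {0: 0}

    def opt(j):                       # value of the optimum on jobs 1..j
        if j not in memo:
            _, _, v = jobs[j - 1]
            memo[j] = max(v + opt(p[j - 1]),   # take job j
                          opt(j - 1))          # skip job j
        return memo[j]

    return opt(len(jobs))
```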

===== 6.2 Summary =====
The key to the efficient algorithm used for the Weighted Interval Scheduling Problem is the array M. This array is built by using the value of optimal solutions to the subproblems on intervals 1 through j for each j, using the previous recurrence to define each value of M[j] based on values that come earlier in the array. Once we have the array M, the problem is essentially solved, because it contains the value of the optimal solution on the full instance, and our Find-Solution algorithm can be used to trace back through M and return the final, optimal solution. Dynamic programming algorithms can thus be developed by iteratively building up subproblems; this iterative view, rather than the memoized recursion, is the form in which dynamic programming algorithms are usually expressed.
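
The iterative build-up of M and the Find-Solution traceback can be sketched together (my own code, same (start, finish, value) convention as above). Find-Solution walks backwards, taking job j exactly when the recurrence shows it was used.

```python
from bisect import bisect_right

def iterative_opt(intervals):
    """Build M iteratively, then trace back to recover the chosen intervals."""
    jobs = sorted(intervals, key=lambda t: t[1])
    finishes = [f for _, f, _ in jobs]
    p = [bisect_right(finishes, jobs[j][0]) for j in range(len(jobs))]
    n = len(jobs)
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        v = jobs[j - 1][2]
        M[j] = max(v + M[p[j - 1]], M[j - 1])
    # Find-Solution: job j is in some optimum iff v_j + M[p(j)] >= M[j-1]
    chosen, j = [], n
    while j > 0:
        v = jobs[j - 1][2]
        if v + M[p[j - 1]] >= M[j - 1]:
            chosen.append(jobs[j - 1])
            j = p[j - 1]
        else:
            j -= 1
    return M[n], chosen[::-1]
```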

===== 6.3 Summary =====
In the Segmented Least Squares problem, the recurrence we use will involve what could be called "multi-way choices": at each step, instead of a yes/no decision for a single interval, we choose among a polynomial number of possibilities for the identity of the last segment. If e_{i,j} denotes the error of the best-fit line through points p_i, ..., p_j and C is the penalty per segment, then OPT(j) = min over 1 ≤ i ≤ j of (e_{i,j} + C + OPT(i-1)), and once the errors are precomputed the dynamic programming itself runs in O(n^2) time.

===== 6.4 Summary =====
In the Subset Sums and Knapsack problems, the "obvious" set of subproblems turns out not to be enough, so we create a richer collection of subproblems by adding an extra variable: OPT(i, w) is the maximum value achievable using a subset of the first i items with total weight at most w. Filling in this two-dimensional table gives an algorithm that runs in O(nW) time, which is pseudo-polynomial, since it depends on the magnitude of the weight bound W rather than on the number of bits needed to write it.
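
The two-variable table can be sketched as follows (my own code; items are (weight, value) pairs). Each cell records the better of skipping item i or taking it, which is exactly the extra dimension the one-variable subproblems lacked.

```python
def knapsack(items, W):
    """items: list of (weight, value) with integer weights. O(n*W) time."""
    n = len(items)
    OPT = [[0] * (W + 1) for _ in range(n + 1)]   # OPT[i][w]
    for i in range(1, n + 1):
        wi, vi = items[i - 1]
        for w in range(W + 1):
            OPT[i][w] = OPT[i - 1][w]                # skip item i
            if wi <= w:                              # or take it, if it fits
                OPT[i][w] = max(OPT[i][w], vi + OPT[i - 1][w - wi])
    return OPT[n][W]
```

With items (2, 3), (3, 4), (4, 5) and W = 5, the best choice is the first two items, total value 7.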

===== 7.1 Summary =====
Graphs can often be used to model transportation networks, whose edges carry "traffic" and whose nodes act as "switches" passing traffic between different edges; each edge has a capacity limiting how much it can carry. In the Maximum-Flow Problem, we want to send as much flow as possible from a source node s to a sink node t without exceeding any edge capacity. The Ford-Fulkerson Algorithm repeatedly finds an s-t path in the residual graph G_f and augments the flow along it; with integer capacities, each augmentation increases the flow value by at least 1, so the algorithm terminates in at most C iterations, where C is the total capacity of the edges leaving s, for a running time of O(mC).
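
A sketch of Ford-Fulkerson with DFS-found augmenting paths (my own code; capacities are given as a dict of directed edges, and reverse edges exist only in the residual graph, where they let later augmentations undo earlier flow).

```python
from collections import defaultdict

def max_flow(capacity, s, t):
    """capacity: dict {(u, v): c} with integer capacities. Returns max flow value."""
    residual = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), c in capacity.items():
        residual[(u, v)] += c
        adj[u].add(v); adj[v].add(u)   # reverse edges live in the residual graph

    def find_augmenting_path():
        stack, parent = [s], {s: None}
        while stack:
            u = stack.pop()
            if u == t:                  # rebuild the path from the parents
                path = []
                while parent[u] is not None:
                    path.append((parent[u], u)); u = parent[u]
                return path[::-1]
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u; stack.append(v)
        return None                     # no s-t path left: flow is maximum

    flow = 0
    while (path := find_augmenting_path()):
        bottleneck = min(residual[e] for e in path)
        for u, v in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck
    return flow
```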

===== 7.2 Summary =====
This section is about the analysis of the Ford-Fulkerson Algorithm, and how it provides insight into the Maximum-Flow Problem. Here we see and prove that the flow returned by the Ford-Fulkerson Algorithm has the maximum possible value of any flow in G. By watching the amount of flow that f (any s-t flow) sends across a cut, we can measure the flow value: let f be any s-t flow and (A, B) any s-t cut; then v(f) ≤ c(A, B). If f is an s-t flow such that there is no s-t path in the residual graph G_f, then there is an s-t cut (A*, B*) in G for which v(f) = c(A*, B*). Consequently, the flow returned by the Ford-Fulkerson Algorithm is a maximum flow, and in every flow network the maximum value of an s-t flow equals the minimum capacity of an s-t cut (the Max-Flow Min-Cut Theorem).

===== 7.5 Summary =====
One of the main goals of the Maximum-Flow Problem is to be able to solve the Bipartite Matching Problem. A bipartite graph G = (V, E) is an undirected graph whose node set can be partitioned into sets X and Y so that every edge has one end in X and the other in Y. A matching M in G is a subset of the edges such that each node appears in at most one edge of M. The Bipartite Matching Problem involves finding a matching in G of the largest possible size. The size of the maximum matching in G is equal to the value of the maximum flow in the corresponding flow network G', and the edges in such a matching are exactly the edges that carry flow from X to Y in G'. The Ford-Fulkerson Algorithm can be used to find a maximum matching in a bipartite graph in O(mn) time. Assume that the bipartite graph G = (V, E) has two sides X and Y such that |X| = |Y|. Then G either has a perfect matching or there is a subset A ⊆ X such that |Γ(A)| < |A|; a perfect matching or an appropriate subset A can be found in O(mn) time. Question: What real-world applications do bipartite graphs have? Interest: 4/10
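
The augmenting-path idea specializes nicely to matchings. A sketch (my own code; the graph maps each node of X to its neighbors in Y): each pass either matches x directly or re-routes an already-matched neighbor along an alternating path, exactly what an augmenting path in G' does.

```python
def max_bipartite_matching(graph, X):
    """graph: dict mapping each node of X to an iterable of neighbors in Y.
    Returns the size of a maximum matching."""
    match = {}   # y -> the x it is currently matched to

    def try_augment(x, seen):
        for y in graph.get(x, ()):
            if y in seen:
                continue
            seen.add(y)
            # take y if it is free, or if its partner can be re-matched
            if y not in match or try_augment(match[y], seen):
                match[y] = x
                return True
        return False

    return sum(try_augment(x, set()) for x in X)
```

For example, with edges 1-a, 1-b, 2-a, 3-b, nodes 2 and 3 force a and b, so the maximum matching has size 2 and node 1 stays unmatched.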

===== 7.7 Summary =====
In the Maximum-Flow Problem, we had only a single source s and a single sink t. Now we can have a set S of sources and a set T of sinks. Instead of maximizing the flow value, we have sources with fixed supply values and sinks with fixed demand values, and we want to ship flow from the nodes with supply to those with demands. There is a feasible circulation with demands {d_v} in G if and only if the maximum s*-t* flow in G' has value D. If all capacities and demands in G are integers, and there is a feasible circulation, then there is a feasible circulation that is integer-valued.