My journal entries for the “Algorithm Design” textbook by Jon Kleinberg and Éva Tardos.
Readability Score | 8 / 10
Interest Score | 9 / 10
A flow network is a directed graph with a source node where some sort of traffic originates, a destination (sink) node where all of the traffic ends up, and directed edges whose capacities indicate the maximum amount of traffic allowed between two nodes. The total amount of flow leaving the source must equal the total amount of flow arriving at the destination. Additionally, flow is conserved at every other node: the amount of flow into a node is equal to the amount of flow out of it.
Given a problem that can be modeled with a flow network, we typically want to find the maximum flow of the graph. We use the Ford-Fulkerson algorithm to do this: it repeatedly finds a path with leftover capacity in a residual graph and pushes additional flow along that path, making progress toward the maximum flow until no such augmenting path remains.
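As a concrete sketch of the idea, here is a small Python version that finds augmenting paths with breadth-first search (the Edmonds-Karp way of choosing paths). The dict-of-dicts capacity format and the function name are my own choices for illustration, not the book's:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp).

    capacity: dict of dicts, capacity[u][v] = capacity of edge (u, v).
    Returns the value of a maximum source-to-sink flow.
    """
    # Residual graph: start with the original capacities and add
    # zero-capacity reverse edges so flow can be "undone" later.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)

    def bfs_path():
        # Find any source -> sink path with remaining residual capacity.
        parent = {source: None}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    if v == sink:
                        return parent
                    queue.append(v)
        return None

    flow = 0
    while (parent := bfs_path()) is not None:
        # Reconstruct the path and find its bottleneck capacity.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        # Augment: push the bottleneck amount along the path.
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
    return flow

caps = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}}
print(max_flow(caps, "s", "t"))  # 4
```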
When finding the maximum flow of a flow network, it is useful to find the bottleneck of the graph, which tells us the best case we could hope for. This is done by separating the nodes of the graph into two groups, one containing the source and one containing the destination, called a cut, and summing up the capacities of the edges that cross from the source's group to the destination's group. Since all of the flow has to go through these edges (having no other path to the destination node), the value of any flow can be at most the capacity of any cut, so the best we can hope for is a flow equal to the minimum capacity over all cuts.
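As a small illustration, using the same dict-of-dicts format as the sketch above, the capacity of a cut is just the sum of the capacities of the edges leaving the source side; for the toy graph above, the cut {s, a} has capacity 4, which matches the maximum flow:

```python
def cut_capacity(capacity, source_side):
    """Sum the capacities of edges that leave the given set of nodes.

    Any flow value is at most this number, so the minimum cut capacity
    over all cuts equals the maximum flow (max-flow min-cut).
    """
    return sum(cap
               for u in source_side
               for v, cap in capacity.get(u, {}).items()
               if v not in source_side)

caps = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}}
print(cut_capacity(caps, {"s"}))       # 5
print(cut_capacity(caps, {"s", "a"}))  # 4, which matches the max flow
```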
A bipartite graph is a graph whose nodes can be divided into two sets, or “sides,” such that every edge connects a node on one side to a node on the other side; no edge connects two nodes on the same side. A node on one side of the graph can be connected to more than one node on the other side.
The bipartite matching problem is the problem of finding the largest set of edges with distinct endpoints in a bipartite graph. It can be solved by adding a source node linked to all of the nodes on the left side of the graph, adding a destination node linked from all of the nodes on the right side, directing all of the original edges to the right (toward the destination node), and giving every edge a capacity of 1. Once you have this, you can run the Ford-Fulkerson algorithm on the resulting flow network; the edges between the two sides that carry flow form the largest set of edges with distinct endpoints.
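A sketch of that reduction, reusing the max_flow function from the earlier sketch (the node names "source" and "sink" and the toy edge list are made up for illustration):

```python
def max_bipartite_matching(left, right, edges):
    """Reduce bipartite matching to max flow.

    left, right: the two sides of the bipartite graph.
    edges: iterable of (l, r) pairs with l in left and r in right.
    """
    capacity = {"source": {l: 1 for l in left}, "sink": {}}
    for l in left:
        capacity[l] = {}
    for r in right:
        capacity[r] = {"sink": 1}
    for l, r in edges:
        capacity[l][r] = 1          # all original edges directed left -> right
    return max_flow(capacity, "source", "sink")

# Each of a, b can be matched to a distinct one of x, y, so the answer is 2.
print(max_bipartite_matching(["a", "b"], ["x", "y"],
                             [("a", "x"), ("a", "y"), ("b", "x")]))  # 2
```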
There are a lot of interesting problems that are based on the maximum-flow problem. One such problem is a flow network in which each node can also have its own supply or demand. A node with a supply brings a certain amount of flow into the graph, while a node with a demand acts as a destination for a certain amount of flow leaving the graph.
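A sketch of the usual reduction of this circulation-with-demands problem to maximum flow, attaching a super-source to the supply nodes and a super-sink to the demand nodes (again reusing max_flow from above; I'm following the book's sign convention that positive values are demands and negative values are supplies):

```python
def has_feasible_circulation(capacity, demand):
    """Check feasibility of a circulation with node demands.

    demand[v] > 0 means v wants that much net flow in (a demand node);
    demand[v] < 0 means v supplies that much net flow (a supply node).
    A feasible circulation exists iff the max flow from a super-source
    to a super-sink saturates all of the super-source's edges.
    """
    caps = {u: dict(nbrs) for u, nbrs in capacity.items()}
    caps["super_s"], caps["super_t"] = {}, {}
    for v, d in demand.items():
        if d < 0:                                   # supply node: fed by the super-source
            caps["super_s"][v] = -d
        elif d > 0:                                 # demand node: drains into the super-sink
            caps.setdefault(v, {})["super_t"] = d
    total_supply = sum(-d for d in demand.values() if d < 0)
    total_demand = sum(d for d in demand.values() if d > 0)
    if total_supply != total_demand:
        return False                                # supplies and demands can't balance
    return max_flow(caps, "super_s", "super_t") == total_supply
```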
Another problem is a flow network in which each edge has a lower bound in addition to its usual upper bound (capacity). This reduces to the previous problem: for each edge with lower bound ℓ, send those ℓ units along the edge up front, which means subtracting ℓ from the edge's capacity, adding ℓ to the demand of the node the edge leaves, and subtracting ℓ from the demand of the node the edge enters.
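A small sketch of that bookkeeping, producing inputs for the feasibility check above (the dict formats are the same illustrative ones I've been using, and every edge with a lower bound is assumed to also have a capacity):

```python
def strip_lower_bounds(capacity, lower, demand):
    """Convert a circulation with lower bounds into one without them.

    capacity, lower: dicts of dicts keyed by edge (u, v).
    demand: dict of node demands (positive = demand, negative = supply).
    Returns reduced (capacity, demand); add the lower bounds back to
    any flow found on the reduced problem to recover the real flow.
    """
    new_cap = {u: dict(nbrs) for u, nbrs in capacity.items()}
    new_dem = dict(demand)
    for u, nbrs in lower.items():
        for v, lb in nbrs.items():
            new_cap[u][v] -= lb                     # lb units are sent unconditionally
            new_dem[u] = new_dem.get(u, 0) + lb     # the tail has already sent lb out
            new_dem[v] = new_dem.get(v, 0) - lb     # the head has already received lb
    return new_cap, new_dem
```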
Readability Score | 6 / 10
Interest Score | 8 / 10
The weighted interval scheduling problem is a lot like the unweighted interval scheduling problem, except that each job to be scheduled has a weight (or priority), and we want to schedule a set of non-overlapping jobs with the largest possible total weight. Once weights enter the picture, a greedy algorithm isn't enough. We need something that is more powerful than greedy algorithms but more efficient than brute-force algorithms.
We use recursion all the time without even thinking about it. Sometimes, however, we do more work than we have to when we come across the same call in different recursive call branches of execution. Instead of recomputing those results, we can use memoization to remember the value of previously calculated results. Every time the recursive function is called, we check the call against a memoization table. If there is no stored result for that call, we run the recursive call and store its result in the memoization table. If there is a stored result, simply return that instead of running the call again.
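A sketch of memoization applied to the weighted interval scheduling problem above, with jobs sorted by finish time and p(j) defined as the last job that finishes before job j starts (the tuple format and function names are mine):

```python
import bisect
from functools import lru_cache

def max_weight_schedule(jobs):
    """jobs: list of (start, finish, weight) tuples.

    Returns the largest total weight of a set of non-overlapping jobs,
    using recursion with memoization over the jobs sorted by finish time.
    """
    jobs = sorted(jobs, key=lambda job: job[1])   # sort by finish time
    finishes = [f for _, f, _ in jobs]

    # p[j] = index (1-based) of the last job that finishes no later than
    # job j starts, or 0 if there is no such job.
    p = [0] + [bisect.bisect_right(finishes, s) for s, _, _ in jobs]

    @lru_cache(maxsize=None)                      # the memoization table
    def opt(j):                                   # best weight using jobs 1..j
        if j == 0:
            return 0
        start, finish, weight = jobs[j - 1]
        # Either skip job j, or take it plus the best compatible prefix.
        return max(opt(j - 1), weight + opt(p[j]))

    return opt(len(jobs))

print(max_weight_schedule([(0, 3, 2), (3, 5, 4), (5, 7, 4), (1, 8, 7)]))  # 10
```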
Instead of memoization, we can also use an algorithm that iterates over the subproblems. In this case, we solve the subproblems in order, from smallest to largest, and build up the solution's value from those. Typically, we then have to do some post-processing, tracing back through the table of subproblem results, to recover the solution steps that led to the value we computed during the iteration.
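The same computation done iteratively, filling a table M[0..n] of subproblem values and then tracing back through it to recover which jobs were actually chosen (a sketch under the same assumptions as the memoized version above):

```python
import bisect

def max_weight_schedule_iterative(jobs):
    """Bottom-up version: fill M[0..n], then trace back to list the chosen jobs."""
    jobs = sorted(jobs, key=lambda job: job[1])
    finishes = [f for _, f, _ in jobs]
    p = [0] + [bisect.bisect_right(finishes, s) for s, _, _ in jobs]

    n = len(jobs)
    M = [0] * (n + 1)
    for j in range(1, n + 1):                     # solve subproblems smallest-first
        M[j] = max(M[j - 1], jobs[j - 1][2] + M[p[j]])

    # Trace back through the table to recover the actual schedule.
    chosen, j = [], n
    while j > 0:
        if jobs[j - 1][2] + M[p[j]] >= M[j - 1]:  # job j was worth taking
            chosen.append(jobs[j - 1])
            j = p[j]
        else:                                     # job j was skipped
            j -= 1
    return M[n], list(reversed(chosen))

print(max_weight_schedule_iterative([(0, 3, 2), (3, 5, 4), (5, 7, 4), (1, 8, 7)]))
# (10, [(0, 3, 2), (3, 5, 4), (5, 7, 4)])
```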
Given a plot of points on an xy-plane, the segmented least squares problem asks for the line segments of best fit. We want a solution that has a small amount of error while also using a small number of line segments (otherwise n − 1 line segments, one between each pair of consecutive points, would always be the optimal solution). We're going to use the tradeoff function E + cL, for some constant c > 0, where E is the sum over segments of the squared errors within each segment and L is the number of line segments, as our metric for an optimal solution.
To find the solution, our insight is that the last point must belong to some final segment that begins at an earlier point. So we iterate through the points (starting at the last point) and make a multiway decision about which point the segment containing the current point begins at, combining that segment's error with the optimal solution for the points that come before it.
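Written out, if e(i, j) denotes the minimum squared error of fitting a single line to points p_i through p_j, the recurrence from the chapter (using the constant c from the tradeoff function above) is OPT(0) = 0 and OPT(j) = min over 1 ≤ i ≤ j of ( e(i, j) + c + OPT(i − 1) ), and the segment containing p_j begins at whichever p_i achieves the minimum.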
The knapsack problem is a problem in which we have n items, each of which has a weight w > 0 and a value v > 0. If we have a knapsack with a capacity of W kilograms, how can we fill the knapsack so as to maximize the total value?
We go about solving this problem much like we did with the weighted interval scheduling problem, in that for each item we make a binary decision to include it in the solution or not. Unlike weighted interval scheduling, however, items do not conflict with one another just because of their weights, so we can't stop the algorithm once we choose an item; we have to keep going to make sure we consider all feasible combinations. To do this, we track a weight limit that designates the amount of weight we can still add to the knapsack after the items already chosen, and we use that ongoing weight limit to determine whether or not an item can still be included. This means the subproblems are indexed by both the item and the remaining weight limit.
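A bottom-up sketch of that idea, with opt[i][w] = the best value achievable using the first i items when w units of capacity remain (the function name and the integer-weight assumption are mine):

```python
def knapsack(items, W):
    """items: list of (weight, value) pairs with integer weights; W: capacity.

    opt[i][w] = best value achievable using the first i items
    with at most w units of capacity available.
    """
    n = len(items)
    opt = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        weight, value = items[i - 1]
        for w in range(W + 1):
            opt[i][w] = opt[i - 1][w]                 # skip item i
            if weight <= w:                           # or take it, if it fits
                opt[i][w] = max(opt[i][w], value + opt[i - 1][w - weight])
    return opt[n][W]

print(knapsack([(2, 3), (3, 4), (4, 5), (5, 8)], 7))  # 11: the items of weight 2 and 5
```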