courses:cs211:winter2018:journals:melkersonr:chapter3 (last modified 2018/02/06 22:33 by melkersonr)
===== Section 3.4 =====

  * **Summary & Motivations:** Section 3.4 is Testing Bipartiteness: An Application of Breadth-First Search. A graph is bipartite if the nodes can be partitioned into two sets, call them X and Y, where every edge connects one node in X and one node in Y. The problem presented in this section is figuring out a way to determine if a graph is bipartite. Theorem **3.14** states that a graph with an odd cycle cannot be bipartite, but the authors ask, "Are there other, more complex obstacles to bipartiteness?" (p 95).
  * **About the Algorithms:** The algorithm for determining whether or not a graph is bipartite involves coloring the starting node red, its neighbors blue, its neighbors' neighbors red, and so on in layers. If we end up with a valid coloring, the graph is bipartite; if we do not, the graph is not. At the implementation level, all we have to do is modify the BFS algorithm. We add an array Color over the nodes, then in BFS when "we are adding a node v to a list L[i+1], we assign Color[v] = red if i + 1 is an even number, and Color[v] = blue if i + 1 is an odd number" (p 96). At the end, we scan the edges and determine if any edge has both ends in the same color, which would be an invalid coloring. The authors analyze this algorithm to show that it both correctly determines if a graph is bipartite and "shows that we can find an odd cycle in G whenever it is not bipartite" (p 96).
  * **My Questions:** Not a question, but I wish the authors were more explicit when giving the runtime of algorithms. On page 96, they say, "At the end of this procedure, we simply scan all the edges and determine whether there is any edge for which both ends received the same color. Thus, the total running time for the coloring algorithm is O(m+n), just as it is for BFS." I wish they would break the process down a little more and say where the components of the runtime come from.
  * **Second Time Around:** I could have sworn we talked in class about doing BFS first, and //then// coloring the layers that were produced in order to implement the algorithm to determine a graph's bipartiteness. I was uncomfortable with that idea. I wanted to just do everything at the same time. After reading, though, it sounds like that's what we're supposed to do because we add a Color array and assign nodes colors as we go. It makes more sense now.
  * **Note to Self:** It's possible to build off of algorithms that we already know. Here, we're starting with BFS and modifying it to answer our question about a graph's bipartiteness.
  * **Readability:** 8 - pretty easy read, but like I said in "My Questions," I wish the authors would break things down a little more.
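A quick sketch of the coloring idea in Python (my own code, not from the book; the function name `is_bipartite` and the dict-of-lists adjacency representation are my own choices). It colors layer by layer during BFS and then does the final edge scan the authors describe:

```python
from collections import deque

def is_bipartite(adj):
    """BFS 2-coloring of an undirected graph given as {node: [neighbors]}.

    Returns (True, coloring) if a valid red/blue coloring exists, or
    (False, edge) where edge has both ends the same color.
    """
    color = {}
    for start in adj:                         # handle disconnected graphs
        if start in color:
            continue
        color[start] = 0                      # 0 = red, 1 = blue
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]   # next layer gets the other color
                    queue.append(v)
    # Final scan over all edges, as on p 96: a monochromatic edge means
    # there is an odd cycle, so the graph is not bipartite.
    for u in adj:
        for v in adj[u]:
            if color[u] == color[v]:
                return False, (u, v)
    return True, color
```

Both phases (the BFS and the edge scan) touch each node and edge a constant number of times, which is where the O(m+n) total comes from.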

===== Section 3.5 =====

  * **Summary:** Section 3.5 is Connectivity in Directed Graphs. We recall that directed graphs have edges with directionality. An edge (u,v) is said to go from node u to node v. There is not necessarily also an edge (v,u) from v to u. Even though our edges now have a direction, we can still talk about BFS and DFS. First, the matter of representing a directed graph. We still use an adjacency list, but now we have two lists for each node: one for the nodes //to// which it has edges, another for the nodes //from// which it has edges.
BFS and DFS work almost exactly the same for directed graphs as for undirected graphs, except that we follow edges based on their directionality. For BFS, we follow the edges out of nodes and ignore the edges in. For DFS, "when DFS is at a node u, it recursively launches a depth-first search, in order, for each node to which u has an edge" (p 98). The last subsection is about strong connectivity, which means that for every two nodes in the graph, there is a path between them in both directions (every pair of nodes is "mutually reachable"). We can determine in linear time whether a graph is strongly connected: run BFS on G and on G_{rev}, and if either of the searches "fails to reach every node, then clearly G is not strongly connected" (p 98). Finally, say the graph is not strongly connected. We can find the strong component containing a node s by taking the intersection of the nodes reached by BFS from s on G and on G_{rev}.
  * **My Questions:** Is there such a thing as bipartiteness for a directed graph?
  * **Second Time Around:** At first, I thought that you might end up adding nodes multiple times to a BFS tree of a directed graph or have some weird structure other than a tree if a later node goes back to an earlier node. Then I recalled that the BFS algorithm prescribes adding a node to a layer only if it is not already Discovered.
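The double-BFS test can be sketched in a few lines of Python (my own code, not the book's; `reachable` and `is_strongly_connected` are names I made up, and I assume the graph is given as a dict of out-edge lists with every node as a key):

```python
from collections import deque

def reachable(adj, s):
    """Set of nodes reachable from s by BFS, following edges out of nodes."""
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def is_strongly_connected(adj):
    """Run BFS on G and on G_rev from the same start node; G is strongly
    connected iff both searches reach every node."""
    nodes = set(adj)
    rev = {u: [] for u in nodes}
    for u in adj:
        for v in adj[u]:
            rev[v].append(u)          # reverse every edge to build G_rev
    s = next(iter(nodes))
    return reachable(adj, s) == nodes and reachable(rev, s) == nodes
```

When the test fails, `reachable(adj, s) & reachable(rev, s)` is exactly the strong component containing s, matching the intersection idea above.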

===== Section 3.6 =====

  * **Summary & Motivations:** Section 3.6 is Directed Acyclic Graphs and Topological Ordering. A directed acyclic graph (DAG) is a directed graph with no cycles. A common example of a DAG is a dependency network - like for course prerequisites, or a list of tasks where some must be done before others or the output of some is the input of others. We represent such relationships by giving each task a node, and for each instance where task i must be done before task j we have the directed edge (i,j). Presented with a DAG that represents dependencies, it's natural to ask for a valid ordering of the tasks. Such an ordering is called a topological ordering. Theorem **3.18** says that "If G has a topological ordering, then G is a DAG" (p 101). We now ask whether every DAG has a topological ordering and, if it does, how we can find it.
  * **About the Algorithms:** Finding a topological ordering involves first selecting a node with no incoming edges, for we know that it doesn't depend on the completion of any other task. We know that there is always at least one node with no incoming edges because otherwise there would be a cycle in the graph and it wouldn't be a DAG.
Furthermore, every DAG does have a topological ordering, which the authors prove by induction. Once we find this node that has no incoming edges, we delete it from the graph. Doing so only removes edges, rather than adding them, so no cycle could be created and the graph is still a DAG. We continue the process of finding a node with no incoming edges and deleting that node, adding that node after the one before, until all nodes are deleted. We end up with a topological ordering for the DAG, but there could be multiple valid orderings. This algorithm is O(n^2) without any cleverness in finding a node with no incoming edges. If we're clever, though, we can get O(n+m) runtime. We facilitate this cleverness by maintaining knowledge about which nodes haven't been deleted yet ("active" nodes): for each node, maintain the number of incoming edges from active nodes; and a set of all active nodes that have no incoming edges from other active nodes. Then we simply select a node to delete from that set, go through the nodes to which it had an edge, and subtract one from its value for the number of active incoming nodes. If that number drops to zero, we add that node to the set.
  * **My Questions:** Is there a reason to talk about directed cyclic graphs?
  * **Second Time Around:** We weren't able to cover the part where we improve the runtime for finding a topological ordering from O(n^2) to O(n+m) in class. It's a good example of first finding a brute force solution (and a solution that works), and then improving it.
  * **Note to Self:** If a graph depicting dependencies contained a cycle C, "there would be no way to do any of the tasks in C: since each task in C cannot begin until some other one complete, no task in C could ever be done, since none could be done first" (p 100). That makes sense, but I'm not sure I would have thought of it without being asked, "If there were a cycle, could any of the tasks in the cycle be completed?"
  * **Readability:** 9 - I thought this section was pretty straightforward.
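The O(n+m) version described above can be sketched in Python (my own code, not from the book; `topological_order` is my name for it, and I assume the DAG is a dict mapping each node to its list of out-neighbors). The queue plays the role of the set of active nodes with no active incoming edges:

```python
from collections import deque

def topological_order(adj):
    """O(n+m) topological sort: track, for each node, its count of incoming
    edges from active (not-yet-deleted) nodes, plus a queue of active nodes
    with no active incoming edges."""
    indeg = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            indeg[v] += 1
    ready = deque(u for u in adj if indeg[u] == 0)  # nodes with no incoming edges
    order = []
    while ready:
        u = ready.popleft()        # "delete" u from the graph
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1          # u's outgoing edges go away with it
            if indeg[v] == 0:
                ready.append(v)    # v now has no active incoming edges
    if len(order) != len(adj):
        raise ValueError("graph has a cycle; no topological ordering exists")
    return order
```

Each node is enqueued and dequeued once and each edge is examined once when its source is deleted, which is where the O(n+m) bound comes from.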