courses:cs211:winter2018:journals:nasona:chapter3 [2018/02/06 15:19] nasona

=======3.2 Graph Connectivity and Traversal=======
  
==Summary==
There are two ways of identifying node-to-node connectivity: breadth-first search and depth-first search. Breadth-first search explores outward from s in all possible directions, adding nodes to the resulting BFS tree layer by layer. Depth-first search starts at node s, follows one edge out to another node, then an edge from that node outward, and so on; when it hits a dead end, it backtracks to a node that still has an edge to an unexplored node. The resulting BFS tree is wider, shallower, and bushier than the DFS tree, which is narrow and spindly. Moreover, the following theorem holds: for any two nodes s and t in a graph, their connected components are either identical or disjoint.
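As a rough illustration (a sketch, not the book's code; the adjacency-dict representation and the example graph below are made up for this journal), BFS's layer-by-layer growth can be written as:

```python
def bfs_layers(adj, s):
    """Return the BFS layers L0, L1, ... starting from s.
    adj maps each node to a list of its neighbors (hypothetical example)."""
    discovered = {s}
    layers = [[s]]
    while layers[-1]:
        next_layer = []
        for u in layers[-1]:
            for v in adj[u]:
                if v not in discovered:   # first time v is seen
                    discovered.add(v)
                    next_layer.append(v)
        layers.append(next_layer)
    return layers[:-1]  # drop the final empty layer

# small example graph (assumed for illustration)
adj = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 5], 4: [2], 5: [3]}
```

Each layer holds the nodes at that distance from s, which is exactly the "out in all possible directions" growth described above.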
==Notes==
  * Node-to-node connectivity
      Endfor
  
  * To apply this to s-t connectivity, we simply declare all nodes initially as not explored and invoke DFS(s).
  * The similarities between BFS and DFS arise because both build the connected component containing s, and both achieve qualitatively similar levels of efficiency.
  * Although both BFS and DFS yield a rooted tree T on the component containing s, the trees will have very different structures. BFS trees are widespread and bushy with layers. DFS trees are spindly, narrow, and deep.
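The DFS(s) pseudocode above might be rendered in Python roughly as follows (a sketch; the adjacency-dict representation is an assumption, not the book's notation):

```python
def dfs(adj, s, explored=None):
    """Recursive DFS from s; marks and returns the set of explored nodes,
    i.e. the connected component containing s (sketch)."""
    if explored is None:
        explored = set()
    explored.add(s)
    for v in adj[s]:
        if v not in explored:
            dfs(adj, v, explored)  # follow one edge out, backtrack on return
    return explored

# example graph with two components (assumed for illustration)
adj = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2], 5: [6], 6: [5]}
```

Running it from 1 and from 5 also illustrates the theorem: the two searches return components that share no nodes.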
  * A natural algorithm for producing all the connected components of a graph grows them one component at a time. We start with an arbitrary node s and use BFS or DFS to generate its connected component.
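That one-component-at-a-time loop can be sketched as follows (an illustration with made-up helper names; any BFS or DFS works as the inner search):

```python
def all_components(adj):
    """Grow components one at a time: pick an unvisited node, search from it,
    and repeat until every node is assigned to a component (sketch)."""
    seen, components = set(), []
    for s in adj:
        if s not in seen:
            comp, stack = set(), [s]   # iterative DFS for the inner search
            while stack:
                u = stack.pop()
                if u not in comp:
                    comp.add(u)
                    stack.extend(v for v in adj[u] if v not in comp)
            seen |= comp
            components.append(comp)
    return components

# example graph with three components (assumed for illustration)
adj = {1: [2], 2: [1], 3: [4], 4: [3], 5: []}
```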
  
==Additional Information==
Initially, I was confused by the following claim: for any two nodes s and t in a graph, their connected components are either identical or disjoint. But, after thinking it over, it becomes obvious. If there is a path from s to t, then their connected components must be identical: no matter which node of a connected component you start your BFS or DFS at, the search always yields the same set of nodes. Conversely, if there is no path between the two nodes, then their BFS or DFS trees can have no node in common, because a shared node would put s and t in the same connected component. Therefore, their components must be disjoint.
  
On a scale of 1 to 10, this section was an 8 in terms of readability. The differences between BFS and DFS were explained very well, and the two given algorithms were laid out very explicitly. The only reason this section was not a 10 was because, as usual, it takes me a couple of reads to fully understand the theorems (and what they mean in normal person language) and their proofs.
=======3.3 Implementing Graph Traversal=======
  
==Summary==
There are two ways to represent graphs for algorithm implementation: an adjacency matrix and an adjacency list. Overall, for our purposes, the adjacency list is the more efficient choice -- including for BFS and DFS. To implement BFS, we use an adjacency list, maintain an array that records which nodes have been discovered, and keep lists to represent the layers. The BFS algorithm that results from this implementation runs in O(m + n) time. BFS can also be implemented using a queue, but that version does not improve on the runtime of the implementation we already have. DFS is recursive, and it maintains the nodes waiting to be processed in a stack; we also maintain an array that records which nodes have been explored. The resulting runtime for this implementation of DFS is O(n + m). Moreover, the BFS and DFS algorithms never visit nodes that are not connected to the starting node.
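To make the two representations concrete, here is a small hypothetical 4-node graph (not from the text) built both ways; the matrix takes O(n^2) space regardless of how many edges exist, while the list takes O(n + m) and lets a traversal scan only real edges:

```python
# nodes 0..3 with edges (0,1), (0,2), (1,2), (2,3) -- assumed example
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

# adjacency matrix: an n x n grid of 0/1 entries
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1   # undirected: symmetric entries

# adjacency list: one neighbor list per node
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
```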
  
==Notes==
  * The algorithm only spends a constant amount of work on a given edge or node in the iteration when the connected component it belongs to is under consideration. The total runtime is O(n + m).
  
==Additional Information==
The difference between a node being "explored" and a node being "discovered" was pretty confusing to me, especially because no clear distinction between the two was made in class. A node is discovered the first time it is seen: an edge scanned while exploring a previously discovered node lets us see, and ultimately discover, the new node. By contrast, a node is explored when all incident edges of that node are scanned, potentially yielding new nodes for the connectivity algorithm to discover.
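The discovered/explored distinction shows up naturally in a stack-based DFS; the sketch below (made-up example graph, not the book's code) marks the two moments explicitly in comments:

```python
def dfs_stack(adj, s):
    """Iterative DFS keeping pending nodes on a stack; returns the order
    in which nodes are explored (sketch)."""
    explored = set()
    order = []
    stack = [s]          # s is discovered: it has been seen
    while stack:
        u = stack.pop()
        if u in explored:
            continue
        explored.add(u)  # u is now explored: we scan all its incident edges
        order.append(u)
        for v in adj[u]:
            if v not in explored:
                stack.append(v)  # v is discovered but not yet explored
    return order

# example graph (assumed for illustration)
adj = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
```

A node can sit on the stack (discovered) for a long time before it is popped and explored, which is exactly why the two terms are not interchangeable.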
  
On a scale of 1 to 10, this section was a 7 in terms of readability. I was kind of confused about why the section was called 3.3 Implementing Graph Traversal Using Queues and Stacks when BFS was not initially explained using a queue. The queue version of the implementation felt like an afterthought, especially because the queue versus the regular list implementation did not really make any improvement in terms of runtime. Maybe they did this to explicitly show the difference between BFS and DFS, but nonetheless it seemed like an afterthought. Besides that, the rest of the section was extremely well written and explained very clearly, especially the DFS implementation with stacks.
=======3.4 Bipartite Graphs: An Application of BFS=======
  
==Summary==
A bipartite graph is one whose nodes can be broken up into two sets in such a way that every edge has one end in one set and the other end in the other. If a graph is bipartite, then it cannot contain an odd cycle; in fact, the presence of an odd cycle is the only obstacle to a graph being bipartite. The algorithm we use to test whether a graph is bipartite is essentially BFS with the added step that the odd layers are colored blue and the even layers are colored red, so the total runtime for the algorithm is O(n + m). If no edge of the graph joins two nodes of the same layer, the graph is bipartite; if some edge joins two nodes in the same layer, the graph is not bipartite.
  
==Notes==
  * A bipartite graph is one where the node set V can be partitioned into sets X and Y in such a way that every edge has one end in X and the other end in Y. Set X will be colored red and set Y will be colored blue.
  
The Problem
Analyzing the Algorithm
  * Let G be a connected graph, and let L1, L2,… be the layers produced by BFS starting at node s. Then exactly one of the following two statements must hold true.
    * There is no edge of G joining two nodes of the same layer. In this case G is a bipartite graph in which the nodes in even layers can be colored red, and the nodes in odd layers can be colored blue.
      * Proof: Every edge joins two nodes in adjacent layers. Our coloring procedure gives nodes in adjacent layers the opposite colors, so every edge has ends with opposite colors.
    * There is an edge of G joining two nodes of the same layer. In this case, G contains an odd-length cycle, and so it cannot be bipartite.
      * Proof: If two nodes x and y in the same layer are joined by an edge, then the cycle through x, y and their lowest common ancestor z has odd length, so the graph cannot be bipartite.
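The BFS coloring test above can be sketched as follows (a sketch assuming a connected graph; the example graphs are made up, with a 4-cycle that is bipartite and a triangle that is not):

```python
from collections import deque

def is_bipartite(adj, s):
    """BFS from s, coloring even layers red (0) and odd layers blue (1);
    an edge whose ends get the same color signals an odd cycle (sketch)."""
    color = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in color:
                color[v] = 1 - color[u]  # next layer gets the opposite color
                q.append(v)
            elif color[v] == color[u]:
                return False             # same-layer edge: odd cycle found
    return True

# assumed example graphs for illustration
even_cycle = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}  # 4-cycle
odd_cycle = {1: [2, 3], 2: [1, 3], 3: [2, 1]}              # triangle
```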
  
==Additional Information==
The concept of an odd cycle was initially confusing to me when it was explained in class. It is now clear to me that an odd cycle is a cycle with an odd number of nodes and, therefore, an odd number of edges. Such a cycle makes a graph unable to be bipartite because, starting at any point in the cycle and alternating colors, the first node you color and the last node you color end up adjacent and must be the same color. Therefore, an odd cycle within a graph keeps the graph from being defined as bipartite.
  
On a scale of 1 to 10, this section was a 9 in terms of readability. I was a little nervous about how easy it would be to comprehend this section because the section relied heavily on the concept of visualizing the binary coloring of theoretical nodes. However, the authors did a fantastic job of using the colors to clearly explain how to check if a graph is bipartite.
=======3.5 Connectivity in Directed Graphs=======
  
==Summary==
In directed graphs, the edge (u, v) goes from u to v; edges have a direction. To represent directed graphs we still use adjacency lists, but now two lists are associated with each node: one lists the nodes the given node has edges to (outgoing edges); the other lists the nodes it has edges from (incoming edges). Directed BFS and directed DFS still run in O(n + m) time; the only thing that changes is that the algorithms have to follow the direction of the edges. A graph is strongly connected if, for every two nodes u and v, there is a path from u to v and a path from v to u. If u can be reached from v and v can be reached from u, then the two nodes are mutually reachable. An algorithm for checking strong connectivity merely requires a BFS from some node s in G and another in G with all of its edges reversed; if both searches reach every node, the graph is strongly connected. This algorithm runs in O(m + n) time.
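The search-G-and-reversed-G idea can be sketched like this (a sketch with a made-up outgoing-edges-only representation; the reversed graph is built by flipping every edge):

```python
def reachable(adj, s):
    """Set of nodes reachable from s following edge directions (sketch)."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def is_strongly_connected(adj):
    """Search from an arbitrary node in G and in G with edges reversed;
    both searches must reach every node (sketch)."""
    rev = {u: [] for u in adj}
    for u in adj:
        for v in adj[u]:
            rev[v].append(u)      # reversing: edge (u, v) becomes (v, u)
    s = next(iter(adj))
    nodes = set(adj)
    return reachable(adj, s) == nodes and reachable(rev, s) == nodes

# assumed example graphs for illustration
cycle = {1: [2], 2: [3], 3: [1]}   # strongly connected
chain = {1: [2], 2: [3], 3: []}    # 3 cannot reach 1
```

The `rev` loop also answers the "how do you reverse the edges" question from below: one pass over all edges, appending each edge's tail to its head's list.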
  
==Notes==
==Additional Information==
One thing that I was struggling to understand at first was how you can get the set of nodes with paths to s instead of the set of nodes that s has paths to. This was difficult to grasp at first because getting the set of nodes that s has a path to is just performing BFS or DFS with the additional condition of edge directions. However, after reading the section a second time, I was able to understand how you can get the set of nodes with paths to s: you merely need to reverse the direction of all of the edges in the original graph G, put them in a new graph, and traverse that one with BFS or DFS from s. I'm still a little confused about how exactly you reverse the directions of the original graph. Hopefully, that will be explained to us in class.

On a scale of 1 to 10, this section was a 10 in terms of readability. This section was merely an extrapolation of what we learned in section 3.1 with directed graph connectivity and in 3.2 and 3.3 with BFS and DFS. The authors did a fantastic job of explaining how directed graphs really aren't that different from undirected graphs; they just have the added condition of an edge having direction. This explanation went along well with their explanations of directed BFS, DFS, and strong connectivity.
=======3.6 Directed Acyclic Graphs and Topological Ordering=======
  
==Summary==
A directed graph is acyclic if it has no cycles. A topological ordering of a graph is an ordering of its nodes as v1, v2,… so that i < j for every edge (vi, vj); in other words, all of the edges point forward. All DAGs have topological orderings, and the first node in the topological ordering of a directed acyclic graph must have no incoming edges. We can implement an algorithm for finding the topological ordering of a DAG in O(m + n) time by keeping track of the active nodes, keeping track of the number of incoming edges for each node, and maintaining a set of all the active nodes in the graph with no incoming edges from other active nodes. The active nodes with no active incoming edges are the ones that will be deleted and added to the topological ordering next, because no remaining tasks have precedence over such a node.
  
==Notes==
  * This way, we keep track of all nodes eligible for deletion at all times and expend constant work per edge.
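The delete-a-node-with-no-active-incoming-edges loop can be sketched as follows (a sketch; the dict-of-outgoing-edges representation and the example DAG are assumptions for this journal):

```python
from collections import deque

def topological_order(adj):
    """Repeatedly delete a node with no incoming edges from active nodes,
    maintaining in-degree counts and a queue of currently eligible nodes,
    for O(n + m) total work (sketch)."""
    indegree = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            indegree[v] += 1
    ready = deque(u for u in adj if indegree[u] == 0)  # no incoming edges
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)          # "delete" u: nothing has precedence over it
        for v in adj[u]:
            indegree[v] -= 1     # u's outgoing edges are no longer active
            if indegree[v] == 0:
                ready.append(v)  # v just became eligible for deletion
    return order                 # shorter than n nodes iff G has a cycle

# assumed example DAG: 1 before 2 and 3, both before 4
dag = {1: [2, 3], 2: [4], 3: [4], 4: []}
```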
  
==Additional Information==
At first, I was confused by how to implement the topological ordering algorithm so that it has the tighter bound of O(m + n). But after reading that portion of the section again, it became clearer. What the authors are saying is that, in order to get a tighter bound on the runtime, we need to be more efficient in finding the nodes we are trying to delete. To do this, we keep track of the number of incoming active edges each node has and the set of all active nodes in graph G that have no incoming edges from active nodes. This makes the algorithm run more efficiently because the next node to delete and add to the topological ordering is one with no incoming active edges: at that point, no processes have precedence and need to be performed before that node. Active incoming edges symbolize the fact that there are processes that must be performed before the given node can be popped off.
  
On a scale of 1 to 10, this section is an 8 in terms of readability. The only reason that this section is not a 10 is that it took me a couple of reads to fully understand topological ordering and the runtime of the algorithm used to compute it.
CC Attribution-Noncommercial-Share Alike 4.0 International