courses:cs211:winter2018:journals:goldm:ch3 [2018/02/07 03:19] (current) – goldm
This section opens by discussing the need for an algorithm to determine s-t connectivity, that is, whether there is a path from node s to node t; the book also calls this the maze-solving problem.
| + | It goes on to discuss connected components, specifically, | ||
| + | The next means of finding the connected component of s is a depth first search, similar to traversing a maze, you go from s to the first edge, which goes to v. Then you go from v to the node connected by the first edge of v. You keep doing this until you reach a dead end. Then you go back until you reach a node with unexplored edges, then you explore the next edge in the same fashion. You keep doing this until eventually you are back to s and have explored all of its edges. Performing a depth first search leads to a depth first search tree, which typically is very different from a breadth first search tree as it visits nodes in a different order. While a BFS tree is " | ||
| + | |||
| + | This section was really long and tedious, and we have been discussing the material in depth in class already, so this section gets a 2/10. | ||
| + | |||
| + | ===== 3.3: Implementing Graph Traversal Using Queues and Stacks ===== | ||
| + | |||
| + | The previous two sections discuss the theoretical background of graphs and traversing them, but this section begins the dive into actually implementing them practically. The two main methods of representing a graph are adjacency lists and adjacency matrices. The book uses n to discuss the number of vertices and m to discuss the number of edges. Given this, the book calls O(n + m) linear time. If a graph contains n nodes, an adjacency matrix, A, would be nxn. Entry A[u,v] would equal 1 if an edge exists from u to v and 0 otherwise. In undirected graphs, the matrix is symmetric. Obviously, it may be asymmetric in a directed graph. This representation allows us to check for an edge in constant time, but the space complexity is tightly bound by n squared. The matrix representation also inefficiently checks for edges from a node as it must check every adjacent entry in the matrix, which is timely. For these reasons, as well as those in favor of lists, the book uses the adjacency list representation. This representation tracks an array, A, where A[v] returns a linked list of all nodes connected to v. The space complexity of this representation is only O(n + m). Finding if a node is connected to v requires iterating through the list returned by A[v] which is linear with regards to the number of edges of v. According to the book, the adjacency list is ideal for performing BFS. It performs BFS by maintaining an array, Discovered, which has the entry Discovered[v] set to true if v has been discovered already. We use a list L sub i, to track the nodes in each ith layer. These lists can be implemented by stack or queue, it makes no difference. Using this model, the BFS algorithm runs in O(n + m). | ||
| + | Depth first search' | ||
| + | Given the way the implementation for bfs and dfs was constructed, | ||
| + | |||
| + | This section once again was simply recap of previously explained information, | ||
| + | |||
| + | ===== 3.4: Testing Bipartiteness: | ||
| + | |||
| + | We begin by remembering that a graph is a bipartite graph if its nodes can be divided into two sets such that every edge connects one node from each set. The book presents the fact that if a graph is bipartite, it cannot contain an odd cycle. For purposes of the analysis of an algorithm to check for bipartiteness, | ||
| + | |||
| + | While this section was still only a review, at least it was way shorter; 4/10. | ||
| + | |||
| + | ===== 3.5: Connectivity in Directed Graphs ===== | ||
| + | |||
| + | This section details the necessary adjustments to the previous logic so that it applies to our directed graphs. In order to represent a directed graph, the book uses a modified adjacency list. Instead of having one list per node, it has two. One which contains the nodes reachable from v and one which contains the nodes that reach v. The breadth-first search for a directed graphs simply adds nodes to layer L(i+1) if there is a directed edge from a node in L(i) to that node. The algorithm still runs in O(n+m). It is worth noting that breadth-first search of s in a directed graphs gives you all nodes such that there is a path from s to t, but there may not be a path from t to s. The implementation of DFS follows the same minor changes as BFS and still runs in linear time. | ||
| + | A directed graph is strongly connected if there is an edge from any node to any other node in both directions. Two nodes in a directed graph are mutually reachable if there is a path from a to b as well as a path from b to a. It is worth noting that mutual reachability is a transitive property. From these definitions, | ||
| + | |||
| + | I found this section slightly more interesting than the previous few; also, it was not too dense, this book often seems exceptionally dense. This section gets a 4.5/10. | ||
| + | |||
| + | ===== 3.6: Directed Acyclic Graphs and Topological Ordering===== | ||
| + | |||
| + | In the world of undirected graphs, being acyclic makes a graph rather simple and uninteresting. In the world of directed graphs, however, a directed graph without a directed cycle can still manage to be interesting. For starters, this does not outright force the graph to have few edges, it can still have a good number of edges. Directed Acyclic Graphs are commonly referred to as DAGs. DAGs are important; some of their uses include encoding precedence relations and dependencies. This can be useful by computers to organize processes. If there was a cycle, there would be a loop of precedence that would make it impossible to decide what to do in certain situations. A topological ordering of a graph is an ordering of the nodes {v1, | ||
| + | |||
| + | This section while once again, review, covers a fairly interesting topic and did not drag on too long, at least in comparison to 3.2 and 3.3. As such, it gets a 4.6/10. | ||
