courses:cs211:winter2018:journals:devlinn:chapter3 (devlinn, last revised 2018/02/04 19:05)
An important fact is given regarding the relationship between connected components within a graph: //For any two nodes s and t in a graph, their connected components are either identical or disjoint.//
| + | |||
| + | This section was a good review and an excellent explanation of BFS and DFS as concepts. I do wish there had been better visuals because the dotted lines do not adequately demonstrate the process. I would still give it a 9 for readability. | ||
| + | |||
| + | ===== 3.3 Implementing Graph Traversal Using Queues and Stacks ===== | ||
The two basic ways to represent graphs are **adjacency matrices** and **adjacency lists**. When assessing the running time of algorithms which use either of these representations, it is natural to express bounds in terms of both //n//, the number of nodes, and //m//, the number of edges. An adjacency matrix uses O(//n//^2) space and supports constant-time edge lookup, while an adjacency list uses only O(//m// + //n//) space, which is why the adjacency list is the standard representation for the algorithms in this chapter.

When processing a set of elements, as often occurs in graph search algorithms, it is critical to choose the right representation for those elements. //Stacks// and //queues// are frequently used in this role, because it is often necessary to maintain a particular order when traversing the elements. Queue elements are extracted in //first-in, first-out// (FIFO) order, and stack elements are extracted in //last-in, first-out// (LIFO) order. The BFS algorithm uses a queue-like structure, as can be seen in the following sketch of the algorithm:
  BFS(s):
    set Discovered[s] = true and Discovered[v] = false for all other v
    initialize L[0] to consist of the single element s
    set layer counter i = 0
    set current BFS tree T = ø
    while L[i] is not empty:
      initialize an empty list L[i + 1]
      for each node u ∈ L[i]
        consider each edge (u, v) incident to u
        if Discovered[v] = false:
          set Discovered[v] = true
          add edge (u, v) to tree T
          add v to list L[i + 1]
        endif
      endfor
      i += 1
    endwhile
| + | | ||
The DFS algorithm, sketched below, uses a stack-inspired implementation:
  DFS(s):
    initialize S to be a stack with one element s
    while S is not empty:
      take a node u from S
      if Explored[u] = false:
        set Explored[u] = true
        for each edge (u, v) incident to u
          add v to S
        endfor
      endif
    endwhile
Both the BFS and DFS algorithms shown above run in O(//m// + //n//) time when using an adjacency list representation.
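The DFS sketch can likewise be written directly in Python. As with the BFS example, the adjacency-dict input format is an assumption made for illustration.

```python
def dfs(adj, s):
    """Iterative DFS following the stack-based sketch above.

    Returns the set of nodes reachable from s.
    adj maps each node to a list of neighbors (assumed input format).
    """
    explored = set()
    stack = [s]
    while stack:
        u = stack.pop()          # LIFO: take the most recently added node
        if u not in explored:
            explored.add(u)
            for v in adj[u]:     # a node may be pushed several times,
                stack.append(v)  # but it is explored only once
    return explored
```

Note that, just as in the pseudocode, a node can sit on the stack multiple times; the `Explored` check ensures each node is processed once, preserving the O(m + n) bound.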
| + | |||
| + | Now that we have spent about one week discussing BFS and DFS I feel very comfortable with the concepts and the run time analyzation. This section packed a lot of information in which would have been challenging to understand if I had not already learned BFS and DFS in class. I would rate it a 7 as a result. | ||
| + | |||
| + | ===== 3.4 Testing Bipartiteness: | ||
A graph is //bipartite// if its node set //V// can be partitioned into sets //X// and //Y// in such a way that every edge has one end in //X// and the other end in //Y//. Equivalently, a bipartite graph is one whose nodes can be colored red and blue so that every edge has one red end and one blue end. A useful fact: if a graph is bipartite, then it cannot contain a cycle of odd length.

Let //G// be a connected graph, and let //L1, L2, ...// be the layers produced by BFS starting at node //s//. Then exactly one of the following two things must hold:
  - There is no edge of //G// joining two nodes of the same layer (//G// is bipartite)
  - There is an edge of //G// joining two nodes of the same layer (//G// is not bipartite, since it contains an odd-length cycle)
Though this section was short, I found it helpful as a follow-up to our class discussion on bipartiteness. I now feel that I have a better grasp on the definition of bipartite (without the use of colors). I would rate this section a 9.
| + | |||
| + | ===== 3.5 Connectivity in Directed Graphs ===== | ||
Recall: in a **directed graph**, each edge has a direction, so reachability is not necessarily symmetric. Two nodes //u// and //v// are **mutually reachable** if there is a path from //u// to //v// and also a path from //v// to //u//. A directed graph is **strongly connected** if every pair of its nodes is mutually reachable. Strong connectivity can be tested in O(//m// + //n//) time: run a search from some node //s// in //G//, run a search from //s// in //G^rev// (the graph obtained by reversing every edge), and check that both searches reach every node.

This section was well written and straightforward. I would rate it an 8 for understanding, since I do feel that I can take away an understanding of mutual reachability and its relationship to connectivity.

| + | ===== 3.6 Directed Acyclic Graphs and Topological Ordering ===== | ||
A **directed acyclic graph (DAG)** is a directed graph which contains no cycles. Two real-world examples of DAGs are precedence relations and dependencies. A **topological ordering** of a directed graph //G// is an ordering of its nodes as //v1, v2, ..., vn// so that for every edge //(vi, vj)// we have //i// < //j//; that is, every edge points "forward" in the ordering. If //G// has a topological ordering, then //G// is a DAG, and conversely every DAG has a topological ordering.
  To compute a topological ordering of G:
    find a node v with no incoming edges and order it first
    delete v from G
    recursively compute a topological ordering of G – {v} and append this order after v
The running time of this algorithm is O(//m// + //n//) if implemented carefully: maintain, for each node, a count of its incoming edges from nodes still in the graph, along with a set of the nodes whose count is currently zero, so that each edge is processed only once.
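The careful implementation described above is essentially Kahn's algorithm; here is a minimal Python sketch, with the function name, the `nodes` parameter, and the adjacency-dict format assumed for illustration.

```python
from collections import deque

def topological_order(adj, nodes):
    """Topological ordering of a DAG in O(m + n).

    Repeatedly removes a node with no incoming edges, maintaining
    in-degree counts so each edge is processed exactly once.
    adj maps each node to a list of out-neighbors (assumed format).
    """
    indegree = {u: 0 for u in nodes}        # incoming edges from active nodes
    for u in nodes:
        for v in adj.get(u, []):
            indegree[v] += 1
    ready = deque(u for u in nodes if indegree[u] == 0)
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)
        for v in adj.get(u, []):            # "delete" u from the graph
            indegree[v] -= 1
            if indegree[v] == 0:
                ready.append(v)
    return order    # shorter than nodes only if the graph has a cycle
```

On the DAG `{1: [2, 3], 2: [4], 3: [4], 4: []}` this produces the ordering `[1, 2, 3, 4]`. If the input has a cycle, no node in the cycle ever reaches in-degree zero, so the returned list comes up short, which doubles as a cycle check.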
| + | |||
| + | This section was a little bit more convoluted than the others since we have not fully covered DAGs or topological orderings in class yet. I understand the basic concepts, but I think the algorithm to topologically order a graph is complicated. I would rate this section a 6 because I do not feel like the sections in the book which discuss a specific problem rather than a broad application are as clear. | ||
