courses:cs211:winter2018:journals:boyese:chapter3 [2018/02/07 00:51] (current) – boyese
  
**Theorem 3.8** //For any two nodes s and t in a graph, their connected components are either identical or disjoint.//
 
I thought this section was much easier to understand after last week's homework, Problem Set 3. I learn best by doing rather than by reading or listening, so the problem sets are always helpful when trying to understand the reading. This section had a lot of extraneous information that made it harder to read, so I would give it a 6/10 for readability.
  
=====Section 3.3: Implementing Graph Traversal Using Queues and Stacks=====
Both BFS and DFS spend work only on edges and nodes in the connected component containing the starting node, s, meaning they never see any of the other nodes or edges. Thus the algorithm used to find all connected components of a graph only spends a constant amount of work on a given edge or node in the iteration when the connected component it belongs to is under consideration. Thus the overall running time of this algorithm is still O(m + n).
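The component-finding procedure described above can be sketched as follows. This is a minimal illustration, not the book's code; the function name and the adjacency-dict representation are my own choices.

```python
from collections import deque

def connected_components(adj):
    """Find all connected components of an undirected graph.

    adj: dict mapping each node to a list of its neighbors.
    Returns a list of components, each a set of nodes.
    Every node and edge is touched a constant number of times,
    so the total running time is O(m + n).
    """
    visited = set()
    components = []
    for s in adj:                      # try each node as a starting point
        if s in visited:
            continue
        # BFS from s explores exactly the component containing s
        component = {s}
        visited.add(s)
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in visited:
                    visited.add(v)
                    component.add(v)
                    queue.append(v)
        components.append(component)
    return components
```

Nodes already claimed by an earlier BFS are skipped, which is why the work per node stays constant across all iterations.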
  
Again, after analyzing the algorithms in the problem set from last week, I was better able to understand this section. I would give it a 7/10 for readability.

=====Section 3.4: Testing Bipartiteness: An Application of Breadth-First Search=====
  
(ii) There is an edge of G joining two nodes of the same layer. In this case, G contains an odd-length cycle, and so it cannot be bipartite.//
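The layer-based test can be sketched as a BFS that 2-colors nodes by layer parity; an edge joining two nodes of the same color signals the odd cycle from case (ii). The code below is my own sketch, assuming an adjacency-dict graph.

```python
from collections import deque

def is_bipartite(adj):
    """BFS 2-coloring: a node in layer i gets color i % 2.

    Returns True iff no edge joins two nodes of the same layer
    parity, i.e. the graph contains no odd-length cycle.
    """
    color = {}
    for s in adj:                       # cover every connected component
        if s in color:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]   # next layer, opposite color
                    queue.append(v)
                elif color[v] == color[u]:    # edge within a layer: odd cycle
                    return False
    return True
```

An even cycle alternates colors cleanly, while a triangle forces two adjacent nodes into the same color.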
  
I'm still a bit unsure of what being bipartite means and why it's important. I think testing a graph for bipartiteness as a homework exercise would help me better understand how the algorithm works. I'm also not sure what the applications of knowing whether a graph is bipartite are, or whether the point of introducing it in this chapter is simply another way of showing how BFS can be used. I would give this section a 6/10 for readability because it was very technical.

=====Section 3.5: Connectivity in Directed Graphs=====
  
  
**Theorem 3.17** //For any two nodes s and t in a directed graph, their strong components are either identical or disjoint.//
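One simple way to compute the strong component containing a node s follows directly from the mutual-reachability definition: take the nodes reachable from s in G, intersect them with the nodes reachable from s in the reverse graph. This sketch is my own, not the book's; function names and the adjacency-dict format are assumptions.

```python
from collections import deque

def reachable(adj, s):
    """Set of nodes reachable from s via BFS."""
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def strong_component(adj, s):
    """Strong component of s: all u with both s ~> u and u ~> s.

    A node u can reach s in G exactly when s can reach u in the
    reverse graph, so two BFS runs suffice.
    """
    rev = {u: [] for u in adj}          # build the reverse graph
    for u, nbrs in adj.items():
        for v in nbrs:
            rev.setdefault(v, []).append(u)
    return reachable(adj, s) & reachable(rev, s)
```

On the cycle 1 -> 2 -> 3 -> 1 with a dangling edge 3 -> 4, the strong component of 1 is {1, 2, 3}, while 4 sits alone.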
 
The idea of this section was relatively simple, and I think the explanation was too long-winded for the simplicity of the idea. I am also wondering how connected graphs can be used differently than non-connected graphs and what their purpose is in computer science. This section was short and to the point, so I would give it an 8/10 for readability.

=====Section 3.6: Directed Acyclic Graphs and Topological Ordering=====
 
If a directed graph has no cycles, we call it a **//directed acyclic graph//**, or a DAG for short.
  
===The Problem===
DAGs can be used to encode precedence relations or dependencies in a natural way. To represent a precedence relation, we can introduce a node for each task and a directed edge (i, j) whenever task i must be done before task j. For a directed graph G, a **//topological ordering//** of G is an ordering of its nodes as v<sub>1</sub>, v<sub>2</sub>, ..., v<sub>n</sub> so that for every edge (v<sub>i</sub>, v<sub>j</sub>), we have i < j. A topological ordering on tasks provides an order in which they can be safely performed.
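The defining condition i < j for every edge can be checked directly. A small sketch (the function name and example graph are my own illustration):

```python
def is_topological_ordering(order, edges):
    """Check that every edge (u, v) goes forward in `order`,
    i.e. u appears strictly before v (the i < j condition)."""
    position = {v: i for i, v in enumerate(order)}
    return all(position[u] < position[v] for u, v in edges)
```

For the diamond with edges (1,2), (1,3), (2,4), (3,4), both [1, 2, 3, 4] and [1, 3, 2, 4] are valid topological orderings, while any ordering placing 2 before 1 is not.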
  
**Theorem 3.18** //If G has a topological ordering, then G is a DAG.//
  
===Designing and Analyzing the Algorithm===
  
The question we want to answer is: does every DAG have a topological ordering, and if so, how do we find one efficiently?
  
**Theorem 3.19** //In every DAG G, there is a node v with no incoming edges.//

The existence of such a node v is all we need to produce a topological ordering of G by induction.

**Theorem 3.20** //If G is a DAG, then G has a topological ordering.//

The inductive proof contains the following algorithm to compute a topological ordering of G:
  
<code>
To compute a topological ordering of G:
  Find a node v with no incoming edges and order it first
  Delete v from G
  Recursively compute a topological ordering of G-{v}
    and append this order after v
</code>
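A direct (and deliberately naive) rendering of the pseudocode above, assuming an adjacency-dict representation of the DAG; the function name is my own.

```python
def topological_ordering(adj):
    """Recursive topological ordering, mirroring the pseudocode.

    adj: dict mapping each node to a list of its successors
         (the graph must be a DAG).
    Scanning for a source and deleting it cost O(n) work per
    iteration, giving O(n^2) overall.
    """
    if not adj:
        return []
    # find a node v with no incoming edges (Theorem 3.19 guarantees one)
    targets = {v for nbrs in adj.values() for v in nbrs}
    v = next(u for u in adj if u not in targets)
    # delete v from G, recurse on G - {v}, and append that order after v
    rest = {u: [w for w in nbrs if w != v]
            for u, nbrs in adj.items() if u != v}
    return [v] + topological_ordering(rest)
```

On the diamond DAG {1: [2, 3], 2: [4], 3: [4], 4: []} this produces the valid ordering [1, 2, 3, 4].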
 
To bound the running time of this algorithm, we note that identifying a node v with no incoming edges, and deleting it from G, can be done in O(n) time. Since the algorithm runs for n iterations, the total running time is O(n<sup>2</sup>). This is not a bad running time, and if G is very dense, containing Θ(n<sup>2</sup>) edges, then it is linear in the size of the input. But we may want something better when the number of edges m is much less than n<sup>2</sup>; in such a case, a running time of O(m + n) would be a significant improvement over Θ(n<sup>2</sup>), and indeed this is possible.
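One standard way to reach the O(m + n) bound is to maintain an in-degree count for every node and a queue of current sources, so deleting a node costs only the work of decrementing its successors' counts. This sketch (sometimes called Kahn's algorithm) is my own illustration of the improvement, not necessarily the book's exact implementation.

```python
from collections import deque

def topological_ordering_fast(adj):
    """O(m + n) topological ordering via in-degree counts.

    Instead of rescanning the whole graph for a source on each
    iteration, keep a count of remaining incoming edges per node;
    a node becomes a source when its count drops to zero.
    """
    indegree = {u: 0 for u in adj}
    for nbrs in adj.values():
        for v in nbrs:
            indegree[v] += 1
    queue = deque(u for u in adj if indegree[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:                 # "delete" u: decrement successors
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != len(adj):           # some node never became a source
        raise ValueError("graph contains a cycle")
    return order
```

Each edge is examined exactly once when its tail is removed, and each node enters the queue exactly once, which gives the O(m + n) total.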
 
I thought this section was interesting because of its applications in the real world and in programming. I would give this section a 9/10 for readability because it was short and to the point.
courses/cs211/winter2018/journals/boyese/chapter3.1517960225.txt.gz · Last modified: by boyese