Chapter 3

3.1 Basic Definitions and Applications

  • A graph is a collection of nodes and edges, which may be either directed or undirected; this can model a wide variety of structures and relationships, especially networks (whether physical or notional); trees are connected graphs that do not contain cycles. A typical operation on a graph is traversal between nodes along edges, and more generally the search for possible and optimal traversals (a small representation sketch follows this list).
  • I would have liked to see some examples of graphs representing things other than networks, like the prior example of phrasing stable matching as a graph-solving problem. They gave a lot of examples, but all fell into two or three sets up to isomorphism.
  • No questions.
  • No other complaints. 7/10.
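
A minimal sketch of one common in-memory representation, assuming Python and made-up node names: each node maps to the list of its neighbours, and the later sketches in this journal assume the same dict-of-lists form.

  # Undirected graph as a mapping from each node to the list of its neighbours.
  # The node names are purely illustrative.
  graph = {
      "a": ["b", "c"],
      "b": ["a", "d"],
      "c": ["a", "d"],
      "d": ["b", "c"],
  }

  # Traversal means following edges out of nodes already reached; one step from "a":
  print(graph["a"])  # ['b', 'c']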

3.2 Graph Connectivity and Traversal

  • Two common patterns for algorithms answering questions about graphs are breadth-first and depth-first search; the former is particularly useful for finding shortest paths and analyzing cycles, and the latter where backtracking is easier than arbitrary jumping; both are suited to finding connected components. In BFS, two nodes sharing an edge must lie in the same layer or in adjacent layers (see the sketch after this list); in DFS, if {a,b} is an edge then either a is an ancestor of b or vice versa.
  • No insights.
  • No questions.
  • No complaints. 7/10.
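
A rough BFS sketch (Python; the graph is assumed to be a dict mapping each node to a list of its neighbours). It builds the layers explicitly, so the property that an edge's endpoints lie in the same or adjacent layers can be checked directly; each node in layer j is at distance j from the start, which is why BFS finds shortest paths.

  def bfs_layers(graph, start):
      """Return the BFS layers [L0, L1, ...] of the nodes reachable from start."""
      discovered = {start}
      layers = [[start]]
      while layers[-1]:
          next_layer = []
          for node in layers[-1]:
              for neighbour in graph[node]:
                  if neighbour not in discovered:
                      discovered.add(neighbour)
                      next_layer.append(neighbour)
          layers.append(next_layer)
      layers.pop()  # the loop always ends by appending one empty layer
      return layers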

3.3 Implementing Graph Traversal Using Queues and Stacks

  • BFS and DFS have a relationship analogous to that between queues and stacks. Both can be implemented in time linear in both parameters, O(m+n) (assuming a graph represented as an adjacency list, the most efficient representation for most operations on sparse graphs). For BFS, at each layer, find all the not-yet-discovered nodes adjacent to the layer under consideration, and make them the next layer. For DFS, when exploring a node, add all adjacent nodes to a stack, take nodes off the stack until reaching one not yet discovered, and explore it (a stack-based sketch follows this list).
  • No insights.
  • I certainly see the stack aspect of DFS. But the relationship between BFS and queues eludes me, since both the layers and the nodes within each layer can be kept in lists or queues without algorithmic implications.
  • Reading this after going through the algorithms in class made it somewhat repetitious, but I cannot find fault. 7/10.
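
A sketch of the stack-based DFS described above, under the same dict-of-lists assumption. A node may be pushed more than once, but it is explored only the first time it comes off the stack, so the total work stays O(m+n). Popping from the front instead (a FIFO queue in place of the stack) makes the same loop discover nodes in BFS order, which is the queue/stack analogy the section draws.

  def dfs(graph, start):
      """Return the set of nodes reachable from start, exploring depth-first."""
      explored = set()
      stack = [start]
      while stack:
          node = stack.pop()            # take nodes off the stack...
          if node in explored:
              continue                  # ...until reaching one not yet explored
          explored.add(node)
          for neighbour in graph[node]:
              stack.append(neighbour)   # add all adjacent nodes to the stack
      return explored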

3.4 Testing Bipartiteness: An Application of BFS

  • A bipartite graph is one in which the nodes can be divided into two sets such that no edge joins two members of the same set; alternatively (and equivalently), bipartite graphs are those containing no odd cycles. Bipartiteness is easily checked by an adaptation of BFS, keeping a separate array for colour (still O(m+n)) and then traversing the edges (also O(m+n)) looking for edges joining same-coloured nodes (a sketch follows this list).
  • No insights.
  • No questions.
  • I think the proof of correctness is somewhat circuitous: the algorithm not only proves bipartiteness but produces a colouring, and it seems easy to show that the colouring is valid. This whole business of odd cycles is itself fascinating, but it demotes the algorithm from a construction to a proof of existence. 6/10.
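
A sketch of the colouring check (same representation assumed): BFS from a start node, colouring each layer opposite to the previous one, then scan the edges for one joining same-coloured endpoints. This handles a single connected component; for a whole graph one would repeat from an uncoloured node until none remain.

  def two_colour(graph, start):
      """Colour the component of start by BFS layer parity (0 even layers, 1 odd)."""
      colour = {start: 0}
      current_layer, depth = [start], 0
      while current_layer:
          depth += 1
          next_layer = []
          for node in current_layer:
              for neighbour in graph[node]:
                  if neighbour not in colour:
                      colour[neighbour] = depth % 2
                      next_layer.append(neighbour)
          current_layer = next_layer
      return colour

  def is_bipartite_component(graph, start):
      """True iff no edge in start's component joins two same-coloured nodes."""
      colour = two_colour(graph, start)
      return all(colour[u] != colour[v] for u in colour for v in graph[u])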

3.5 Connectivity in Directed Graphs

  • Directed graphs can use an expanded adjacency-list representation, in which each node records both its outgoing and its incoming edges. BFS and DFS work as before, but the set they generate does not have all the same properties; in particular, for two nodes in the search's output it is not known whether there is a path between them, only that each is reachable from the root of the search (since DFS preserves ancestor information, it is somewhat more informative). A set is strongly connected if, given any two nodes in the set, there is a path between them in each direction. This can be checked by searching from the same node in both the graph and its reverse; the nodes appearing in both reachable sets form that node's strongly connected set (a sketch follows this list). These strongly connected sets are the directed analogue of connected sets in undirected graphs.
  • No insights.
  • No questions.
  • No complaints. 7/10.
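
A sketch of the check described above (the directed graph assumed to be a dict mapping each node to the list of nodes its outgoing edges reach): search from a node s in the graph and in its reverse; the nodes reachable in both searches are exactly those strongly connected to s.

  def reachable(graph, start):
      """Nodes reachable from start along directed paths (stack-based DFS)."""
      seen = set()
      stack = [start]
      while stack:
          node = stack.pop()
          if node in seen:
              continue
          seen.add(node)
          stack.extend(graph[node])
      return seen

  def reverse(graph):
      """Reverse every edge; this is the 'incoming' half of the expanded adjacency list."""
      rev = {node: [] for node in graph}
      for node, neighbours in graph.items():
          for neighbour in neighbours:
              rev[neighbour].append(node)
      return rev

  def strong_component(graph, s):
      """All nodes u with directed paths both from s to u and from u to s."""
      return reachable(graph, s) & reachable(reverse(graph), s)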

3.6 Directed Acyclic Graphs and Topological Ordering

  • DAGs appear in such contexts as dependency networks (at least one hopes that one's dependencies are acyclic). A topological ordering of a DAG is an ordering of the nodes such that every edge points from an earlier node to a later one. For directed graphs, acyclicity is equivalent to the existence of a topological ordering. Every DAG must have a node with no incoming edges; recursive application of this fact yields a topological ordering. This can be done in O(m+n) time by keeping an array of the number of incoming edges to each node and then, whenever adding a node to the ordering, decrementing that count for each of the nodes to which it leads (a sketch follows this list).
  • I had been wondering how to do this in conceptual work for a project I shall probably never get to (applying parallelization and large-dataset techniques, as discussed yesterday, to econometrics); I am consistently impressed at how elegant some of these solutions are.
  • Does not the O(m+n) runtime of the algorithm depend on being able to select a node with no remaining incoming edges in constant time? If it involves a scan through the array of remaining in-degrees, I would think that the main loop would be O(n^2), since it must repeat once for each node and each step is O(n) on account of the scan.
  • Well written. 8/10.
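
A sketch of the counting algorithm (same directed dict-of-lists assumption). On the question above: the standard trick is to keep a queue of the nodes whose remaining in-degree has reached zero, so selecting the next node is O(1) and the whole pass stays O(m+n).

  from collections import deque

  def topological_order(graph):
      """Return a topological ordering of a DAG, or None if a cycle is detected."""
      indegree = {node: 0 for node in graph}
      for node in graph:
          for neighbour in graph[node]:
              indegree[neighbour] += 1

      # Nodes with no incoming edges are ready to be placed next.
      ready = deque(node for node, d in indegree.items() if d == 0)
      order = []
      while ready:
          node = ready.popleft()
          order.append(node)
          for neighbour in graph[node]:
              indegree[neighbour] -= 1      # "delete" the edge out of the placed node
              if indegree[neighbour] == 0:
                  ready.append(neighbour)   # all its incoming edges are gone
      return order if len(order) == len(graph) else None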