

Chapter 3 - Graphs

Section 3.1 - Basic Definitions and Applications

This section offers a very broad introduction to graphs themselves. We are offered a definition of a graph: a collection of nodes and edges, where the edges connect pairs of nodes. Graphs can be either directed or undirected; in a directed graph, each edge has a tail and a head. The author then walks through various examples of graphs so that we can get a better sense of what they can represent.

The author then walks through the concept of connectivity. This brings into play the idea of paths between nodes, which can be represented as sequences of nodes joined by edges. Additionally, there need not always be a finite distance between two nodes: if no path exists between them, they are not connected.

Following this, the author briefly covers trees and the role they play among graphs. Having just covered heaps in Chapter 2, I found trees very similar, with some small variations. This section offered a strong introduction to graphs and was both understandable and concise. I would rate the readability of this section at 9/10.

Section 3.2 - Graph Connectivity and Graph Traversal

The section begins by laying out the problem of connectivity. For two nodes s and t, we want to be able to determine whether they are connected. We do this by defining a set of nodes called the connected component. The connected component of a node s contains all nodes that are connected to s.

The section then goes on to illustrate a few different ways to generate the connected component, the first being breadth-first search (BFS). BFS starts at a node, call it 's', and then branches out from s to create a layer, L_1 (layer 1). That layer contains all of the nodes to which s is directly connected. The process is then repeated: L_2 contains all nodes connected to some node in L_1 that have not already been placed in an earlier layer. This continues until we reach an empty layer. The book also shows that any node in L_i is at distance exactly i from 's', and that the structure of a BFS can be represented as a tree rooted at the start node.
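The layer-by-layer process above can be sketched as follows. This is a minimal illustration, not the book's pseudocode, assuming a dict-of-lists adjacency representation; it records each node's distance from s, illustrating the property that a node in L_i is at distance exactly i.

```python
# Sketch of BFS by layers on an undirected graph given as a dict of
# adjacency lists. dist[v] == i exactly when v first appears in layer L_i.
def bfs_distances(adj, s):
    dist = {s: 0}          # s is the only node in layer L_0
    layer = [s]
    while layer:           # stop once a layer comes up empty
        nxt = []
        for u in layer:
            for v in adj[u]:
                if v not in dist:      # skip nodes already in some layer
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        layer = nxt
    return dist

# Example graph (hypothetical): s -- a, s -- b, a -- c
adj = {'s': ['a', 'b'], 'a': ['s', 'c'], 'b': ['s'], 'c': ['a']}
print(bfs_distances(adj, 's'))  # {'s': 0, 'a': 1, 'b': 1, 'c': 2}
```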

The section then moves on to the concept of exploring a connected component. It lays out an algorithm for finding the connected component at a deliberately general level: starting from a set R containing just the start node 's', it continually adds to R any node that shares an edge with a node already in R, until no such node remains. I found the proof that this algorithm works slightly confusing, but I agree with the intuition behind it.
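The generic "grow R" idea can be sketched directly; this is an assumption-laden toy version (edge list as tuples, naive repeated scanning) meant to mirror the algorithm's generic spirit rather than an efficient implementation.

```python
# Sketch of the generic component-growing algorithm: start with R = {s}
# and repeatedly add any node with an edge into R, until none remains.
# The repeated full edge scan is deliberately naive.
def connected_component(edges, s):
    R = {s}
    grew = True
    while grew:
        grew = False
        for (u, v) in edges:
            if u in R and v not in R:
                R.add(v)
                grew = True
            elif v in R and u not in R:
                R.add(u)
                grew = True
    return R

edges = [(1, 2), (2, 3), (4, 5)]       # hypothetical example graph
print(connected_component(edges, 1))   # {1, 2, 3}
```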

The section continues on to depth-first search (DFS). This starts at a node s and, rather than defining layers, moves to one neighbor, adds it to the component, and then recursively calls itself. Eventually, when it reaches a dead end, the algorithm backtracks to previous calls and calls itself again to see whether there was a different, deeper path it could have explored. I found the proof of (3.6) to be very helpful, as I found the concept slightly confusing in class.
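The recursive description above can be sketched in a few lines, again assuming a dict-of-lists adjacency representation; backtracking happens automatically as each recursive call returns.

```python
# Sketch of recursive DFS: mark the current node explored, then recurse
# on each unexplored neighbor before returning (backtracking).
def dfs(adj, u, explored=None):
    if explored is None:
        explored = set()
    explored.add(u)
    for v in adj[u]:
        if v not in explored:
            dfs(adj, v, explored)   # go deeper before trying siblings
    return explored

# Hypothetical example: 1 -- 2 -- 3 and 1 -- 4
adj = {1: [2, 4], 2: [1, 3], 3: [2], 4: [1]}
print(dfs(adj, 1))  # {1, 2, 3, 4}
```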

The section ends by outlining the set of all connected components and proves that for any two nodes, their connected components are either identical or disjoint. I found this proof very straightforward. I enjoyed reading this section and found it to be a helpful refresher from class. I would rate the readability at 9/10.
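The identical-or-disjoint property means the components partition the node set, so they can all be produced by restarting a traversal from any not-yet-seen node. A self-contained sketch, assuming a dict-of-lists adjacency representation:

```python
# Sketch: compute all connected components by running an iterative
# traversal from each node not yet assigned to a component. The
# resulting components partition the node set.
def components(adj):
    seen, comps = set(), []
    for s in adj:
        if s not in seen:
            comp, stack = set(), [s]
            while stack:
                u = stack.pop()
                if u not in comp:
                    comp.add(u)
                    stack.extend(v for v in adj[u] if v not in comp)
            seen |= comp
            comps.append(comp)
    return comps

# Hypothetical example: {1, 2} form one component, 3 is isolated.
adj = {1: [2], 2: [1], 3: []}
print(components(adj))  # [{1, 2}, {3}]
```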

Section 3.3 - Implementing Graph Traversal Using Queues and Stacks

The section begins by providing a short summary of what will be covered: the different ways of representing graphs, followed by the implementation of the graph-traversal algorithms.

The section then outlines the two ways of representing a graph: an adjacency matrix and an adjacency list. Although the adjacency matrix offers constant-time access to check a particular connection between two nodes, it uses more space. The book walks through the proof that the adjacency list takes only O(m+n) space, whereas the matrix takes O(n^2) space. I found this proof very helpful, as I was a bit hazy on using two separate variables after covering it in class. Because of this improved space usage (as m is always at most n^2), the book uses adjacency lists for the further implementations.
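The two representations can be contrasted on a small hypothetical graph. The matrix answers "is (u, v) an edge?" in O(1) but always takes O(n^2) space; the lists use O(m+n) space but answer that query by scanning a neighbor list.

```python
# Both representations of the same undirected 4-node graph (nodes 0..3).
n = 4
edges = [(0, 1), (0, 2), (1, 3)]       # hypothetical edge set

# Adjacency matrix: an n x n grid of 0/1 entries, O(n^2) space.
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1    # symmetric because undirected

# Adjacency list: one neighbor list per node, O(m + n) space.
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

print(matrix[0][1], matrix[2][3])  # 1 0  (edge present / absent)
print(adj[0])                      # [1, 2]
```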

The section then walks through the implementation for the BFS. The algorithm uses an array of lists to manage the layers. It also maintains an array of length n to track whether each node has been discovered. It takes the start node, s, sets its value to true in the Discovered array, lets L_0 consist of just 's', sets the layer counter 'i' to 0, and sets the current BFS tree T to the empty set. It then begins a while loop that stops when L[i], the i-th layer, is empty. In the while loop, it forms an empty list L[i+1], considers each node u in L[i], and for each edge (u,v) incident to u, if the node v is not discovered, it marks v as discovered, adds the edge (u,v) to the BFS tree T, and adds v to the list L[i+1]. When the for-loop terminates, the layer counter is incremented, and the while loop continues. The book then proves, very concisely and logically, that this implementation of the BFS algorithm runs in O(m+n) time. I was a little confused why the section made a point to say that we would use queues to implement this (at the beginning) but then went on to note here that we could use either queues or lists without a difference.
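The steps above can be sketched directly in code. This is my own rendering of the described pseudocode, not the book's text, assuming nodes are numbered 0..n-1 and the graph is given as adjacency lists; with that representation the whole loop touches each edge a constant number of times, giving the O(m+n) bound.

```python
# Sketch of the layered BFS implementation: a Discovered array, a list
# of layer lists L, and the BFS tree T collected as a set of edges.
def bfs(adj, s):
    n = len(adj)
    discovered = [False] * n
    discovered[s] = True
    L = [[s]]          # L[0] consists of just s
    T = set()          # edges of the BFS tree
    i = 0
    while L[i]:        # stop when the current layer is empty
        L.append([])   # L[i+1] starts as an empty list
        for u in L[i]:
            for v in adj[u]:           # each edge (u, v) incident to u
                if not discovered[v]:
                    discovered[v] = True
                    T.add((u, v))      # tree edge
                    L[i + 1].append(v)
        i += 1
    return L[:-1], T   # drop the final empty layer

# Hypothetical graph: 0 -- 1, 0 -- 2, 1 -- 3
adj = [[1, 2], [0, 3], [0], [1]]
layers, tree = bfs(adj, 0)
print(layers)  # [[0], [1, 2], [3]]
```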

The section then walks through the implementation of the depth-first search (DFS). This algorithm uses a stack. It initializes S to be a stack with one element, the start node 's'. Then, while S is nonempty, it pops a node u from S, and if that node has not been explored, it marks u as explored and adds each edge (u,v) incident to u to the stack S. By the termination of the algorithm, all nodes reachable from 's' will have been marked explored. The section notes that the algorithm examines the nodes in the same order as the recursive DFS, but the adjacency lists are processed in reverse order (because we are using a stack). The DFS algorithm also runs in O(n+m) time.
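The stack-based version above can be sketched as follows, again assuming a dict-of-lists adjacency representation of my own choosing. Note in the example that node 2 is visited before node 1, showing the reverse-order effect of the stack.

```python
# Sketch of stack-based DFS: push s, then repeatedly pop a node and, if
# it is unexplored, mark it and push all of its neighbors. A node may
# sit on the stack several times; later copies are simply skipped.
def dfs_stack(adj, s):
    explored = set()
    order = []           # visitation order, for illustration
    S = [s]
    while S:
        u = S.pop()
        if u not in explored:
            explored.add(u)
            order.append(u)
            for v in adj[u]:
                S.append(v)   # neighbors come back off in reverse order
    return order

# Hypothetical graph: 0 -- 1, 0 -- 2, 1 -- 3
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(dfs_stack(adj, 0))  # [0, 2, 1, 3]
```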

courses/cs211/winter2018/journals/thetfordt/chapter3.1517771772.txt.gz · Last modified: by thetfordt