Chapter three discusses the concept of graphs, which can be used to represent transportation networks, communication networks, information networks, social networks and dependency networks in the real world. Graphs consist of nodes (or vertices) connected by edges, and can be directed if there is an asymmetrical relationship between two connected nodes, or undirected if the relationship between the two nodes is symmetrical. A path in an undirected graph G = (V, E) is a sequence P of nodes v1, v2, . . . , vk-1, vk with the property that each consecutive pair is joined by an edge in G. A path is simple if all of its nodes are distinct from one another, and a path is a cycle if the sequence of nodes “cycles back” to the first node after visiting all the other nodes exactly once. An undirected graph is connected if for every pair of nodes u and v there is a path between them. A directed graph is strongly connected if for every two nodes u and v there is a path from u to v and from v to u.
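To make the path and cycle definitions concrete for myself, here is a small Python sketch (my own example, not from the book; the edge-set representation and function names are just assumptions for illustration):

<code python>
# My own sketch of the path / simple path / cycle definitions from section 3.1.
# The undirected graph is stored as a set of edges (pairs of nodes).

def is_path(edges, seq):
    """True if each consecutive pair of nodes in seq is joined by an edge."""
    return all((u, v) in edges or (v, u) in edges for u, v in zip(seq, seq[1:]))

def is_simple_path(edges, seq):
    """A path is simple if all of its nodes are distinct."""
    return is_path(edges, seq) and len(set(seq)) == len(seq)

def is_cycle(edges, seq):
    """A cycle "cycles back": the last node equals the first, the rest are
    distinct, and there are at least three distinct nodes."""
    return (len(seq) > 3 and seq[0] == seq[-1] and is_path(edges, seq)
            and len(set(seq[:-1])) == len(seq) - 1)

edges = {(1, 2), (2, 3), (3, 4), (4, 1)}        # a 4-node cycle 1-2-3-4-1
print(is_path(edges, [1, 2, 3]))                # True
print(is_simple_path(edges, [1, 2, 1]))         # False: node 1 repeats
print(is_cycle(edges, [1, 2, 3, 4, 1]))         # True
</code>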

An undirected graph is a tree if it is connected and contains no cycles (p. 77). A tree has a hierarchical structure, and the property that every n-node tree has exactly n-1 edges. We can traverse a graph using breadth-first search or depth-first search. In BFS, we “explore outward from s in all possible directions, adding nodes one layer at a time” (p. 79) and can keep track of which nodes are reachable from s and the shortest path to each (p. 80). The book gives the following property of all BFS trees: let T be a BFS tree, let x and y be nodes in T belonging to layers Li and Lj respectively, and let (x, y) be an edge of G; then i and j differ by at most 1. We can think of DFS as walking through a maze: we follow one path until we hit a dead end, then turn around and walk back the way we came until we find a path we haven’t walked yet, and go as far as we can that way. In short, in DFS we traverse as far down the tree as we can, and when we reach a leaf node we go back up the tree until we find a branch we haven’t explored yet. DFS and BFS both visit the same set of nodes, but they do so in a different order, with BFS going wide and DFS going deep, in a sense. The diagram on p. 85 illustrates this concept, and we also discussed it in class.
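To see the difference for myself, here is a quick Python sketch of both traversals (my own code, not the book’s pseudocode; the dictionary-of-lists graph format is just an assumption):

<code python>
from collections import deque

def bfs_layers(graph, s):
    """Explore outward from s one layer at a time; layer[v] ends up being the
    BFS layer (and the shortest-path distance from s) of each reachable node."""
    layer = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in layer:           # first time we discover v
                layer[v] = layer[u] + 1  # v sits one layer beyond u
                queue.append(v)
    return layer

def dfs(graph, u, visited=None):
    """Go as deep as possible along each branch, backing up at dead ends."""
    if visited is None:
        visited = set()
    visited.add(u)
    for v in graph[u]:
        if v not in visited:
            dfs(graph, v, visited)
    return visited

# Both traversals visit the same set of nodes, just in a different order.
graph = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
print(bfs_layers(graph, 1))   # {1: 0, 2: 1, 3: 1, 4: 2}
print(dfs(graph, 1))          # {1, 2, 3, 4}
</code>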

We can also represent graphs using an adjacency matrix or an adjacency list. If we use an adjacency matrix, we can check the presence of an edge in O(1) time, but we have to store the matrix in O(n^2) space since it uses an n x n matrix. The adjacency list can be stored in less space, specifically O(m + n). I was fuzzy on this at first, but I think the m comes from the total length of the lists: each of the m edges appears in the lists of both of its endpoints, so the lists take up O(m) space on top of the n list headers, which gives O(m + n) overall.
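Here is a small sketch of the two representations side by side (my own code, with made-up node labels 0..n-1), which is what made the O(m + n) bound click for me:

<code python>
def adjacency_matrix(n, edges):
    """n x n matrix of 0/1 entries: O(1) edge lookup, but always O(n^2) space."""
    matrix = [[0] * n for _ in range(n)]
    for u, v in edges:
        matrix[u][v] = 1
        matrix[v][u] = 1          # undirected: record the edge both ways
    return matrix

def adjacency_list(n, edges):
    """One list per node; each of the m edges appears in the lists of both of
    its endpoints, so the lists hold 2m entries total on top of the n list
    headers, which is O(m + n) space."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

edges = [(0, 1), (1, 2), (2, 3)]
print(adjacency_matrix(4, edges))  # 16 cells for only 3 edges
print(adjacency_list(4, edges))    # [[1], [0, 2], [1, 3], [2]]
</code>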

We can maintain a set of elements as a queue or a stack as well. A queue has the characteristic of storing and releasing elements in a first-in, first-out manner, while a stack does this in a last-in, first-out manner. Pp. 90-93 have algorithms showing implementations of BFS using a queue and DFS using a stack.
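The BFS sketch above already works off a queue (a deque, popping from the front). Here is my attempt at the stack-based DFS idea from those pages, written in Python rather than the book’s pseudocode, so the details are my own:

<code python>
def dfs_with_stack(graph, s):
    """Iterative DFS: the stack releases nodes last-in first-out, so we keep
    pushing down one branch and only back up once we hit explored nodes."""
    explored = set()
    stack = [s]
    order = []
    while stack:
        u = stack.pop()              # most recently added node (LIFO)
        if u not in explored:
            explored.add(u)
            order.append(u)
            for v in graph[u]:
                stack.append(v)      # neighbors get explored before older nodes
    return order

graph = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
print(dfs_with_stack(graph, 1))      # [1, 3, 2, 4]
</code>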

We can also use BFS to determine if a graph is bipartite, meaning its nodes can be split into two groups so that every edge goes between the groups, which the book shows is the same as saying the graph does not contain an odd cycle (algorithm on p. 96). We went over this concept in class before I read the section, so I already had a really good understanding of bipartite graphs when I did.
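Here is a sketch of the idea as I understand it (my own Python, not the book’s algorithm): color nodes by BFS layer parity, and if any edge joins two nodes of the same color, the graph contains an odd cycle and is not bipartite. This assumes the graph is connected from the start node s.

<code python>
from collections import deque

def is_bipartite(graph, s):
    """BFS 2-coloring from s: alternate colors layer by layer; an edge whose
    endpoints get the same color signals an odd cycle, so not bipartite."""
    color = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in color:
                color[v] = 1 - color[u]   # opposite color of the node we came from
                queue.append(v)
            elif color[v] == color[u]:    # same color on both ends of an edge
                return False
    return True

triangle = {1: [2, 3], 2: [1, 3], 3: [1, 2]}               # odd cycle
square   = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}    # even cycle
print(is_bipartite(triangle, 1))   # False
print(is_bipartite(square, 1))     # True
</code>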

I give this chapter a 9, because while I’m still a little fuzzy on some running time concepts, I feel like I have a strong comprehension of BFS, DFS, connectivity, adjacency matrix vs. list and generally constructing and traversing trees. I like how the book generally explains things with a combination of text, algorithms and visuals.
