courses:cs211:winter2018:journals:beckg:ch3 (beckg, revised 2018/02/07 04:26)
An //adjacency matrix// is an //n x n// matrix //A// where //A[u,v]// is 1 if the edge //(u,v)// is in the graph. The same works for the other direction, which supports directed graphs as well. Unfortunately, the matrix takes //O(n^2)// space even when the graph has far fewer edges, and examining all the edges incident to a node requires scanning an entire row in //O(n)// time.
//Adjacency lists// improve on much of these issues. What we have is an array of lists: for each node //v//, we keep a list of all the nodes adjacent to //v//, so an edge //(u,v)// appears in both //u//'s and //v//'s lists.

Adjacency lists require only //O(m + n)// space. This is because the list for node //v// has length equal to the degree of //v//, and the sum of all the degrees is //2m//; adding the //n// list headers gives //O(m + n)// total.

Additionally, the two representations trade off access patterns: checking whether //(u,v)// is an edge takes //O(1)// time with the matrix but //O(deg(u))// time with the lists, while walking over all the edges incident to //v// takes only //O(deg(v))// time with the lists instead of //O(n)// with the matrix. Since the algorithms below spend their time walking over neighbors, the list representation is the one we use.

=== Implementing Breadth-First Search ===
To do the following algorithm on the adjacency list structure, we simply maintain an array ''Discovered'' of length //n//, setting ''Discovered[v] = true'' as soon as the search first encounters //v//, and represent each layer //L[i]// as a list of nodes.

**Disclaimer:** the following pseudocode is adapted from the textbook's presentation.

== BFS(s) ==
''Set Discovered[//s//] = true and Discovered[//v//] = false for all other //v//\\
Initialize //L[0]// to consist of just node //s//\\
Set layer counter //i = 0//\\
Set current BFS tree //T// to be an empty set\\
While //L[i]// is not empty:\\
    Initialize an empty list //L[i+1]//\\
    For each node //u in L[i]//:\\
        Consider each edge //(u,v)// incident to //u//\\
        If Discovered[//v//] = false then:\\
            Set Discovered[//v//] = true\\
            Add edge //(u,v)// to tree //T//\\
            Add //v// to list //L[i+1]//\\
        Endif\\
    Endfor\\
    i += 1\\
Endwhile''
| + | |||
| + | This algorithm runs in time //O(m + n)//. Note that instead of various lists we could use a queue to maintain the levels. | ||
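The pseudocode translates fairly directly into Python. This is my own sketch (0-indexed nodes, my own function name, not the book's code):

```python
def bfs(adj, s):
    """Layer-by-layer BFS from s over an adjacency list.

    Returns (layers, tree_edges): layers[i] is the list L[i] of nodes at
    distance i from s, and tree_edges are the edges of the BFS tree T.
    Runs in O(m + n): each node enters one layer, each edge is scanned
    once from each endpoint's list.
    """
    discovered = [False] * len(adj)
    discovered[s] = True
    layers = [[s]]
    tree_edges = []
    while layers[-1]:
        next_layer = []
        for u in layers[-1]:
            for v in adj[u]:
                if not discovered[v]:
                    discovered[v] = True
                    tree_edges.append((u, v))
                    next_layer.append(v)
        layers.append(next_layer)
    layers.pop()  # drop the final empty layer
    return layers, tree_edges
```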
| + | |||
| + | |||
=== Implementing Depth-First Search ===
For DFS, we make use of a stack, and instead of the ''Discovered'' array we maintain an analogous array ''Explored'': a node counts as //explored// only once the search has scanned all of its incident edges.

== DFS(s) ==
''Initialize //S// to be a stack with one element //s//\\
While //S// is not empty:\\
    Take a node //u// from //S//\\
    If Explored[//u//] = false then:\\
        Set Explored[//u//] = true\\
        For each edge //(u,v)// incident to //u//:\\
            Add //v// to the stack //S//\\
        Endfor\\
    Endif\\
Endwhile''
| + | |||
| + | We can further use an array '' | ||
| + | This algorithm runs in time //O(m + n)//. | ||
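As with BFS, here is my own Python sketch of the stack-based version (the return value, a list of nodes in explored order, is my choice for illustration):

```python
def dfs(adj, s):
    """Iterative DFS from s using an explicit stack.

    Returns the nodes in the order they become explored. A node may be
    pushed once per incident edge, so the stack does O(m) work total and
    the whole search runs in O(m + n) time.
    """
    explored = [False] * len(adj)
    order = []
    stack = [s]
    while stack:
        u = stack.pop()
        if not explored[u]:
            explored[u] = True
            order.append(u)
            for v in adj[u]:
                stack.append(v)
    return order
```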
| + | |||
| + | Additionally, | ||
| + | |||
This section explained the algorithms very well; I would give it a 9/10.

| + | ===== 3.4: Testing Bipartiteness with Breadth-First Search ===== | ||
Recall that a bipartite graph is one that can have its nodes partitioned into sets //X// and //Y// such that each edge has one end in //X// and the other in //Y//.

A graph is bipartite if and only if it contains no odd cycles.

We can implement an algorithm for testing this using the BFS search tree. We simply create an array ''Color'', color //s// red, color all of layer //L[1]// blue, and keep alternating colors layer by layer. The graph is bipartite if and only if this coloring never puts both ends of an edge in the same layer, i.e., no edge joins two nodes of the same color.

Though brief, this section complemented our class discussion of the process very well, though I do think it could have been a little clearer in the ordering of its claims; an 8/10.

| + | ===== 3.5: Connectivity in Directed Graphs ===== | ||
In a directed graph, an edge //(u,v)// indicates an edge //from// //u// //to// //v//. To represent these, we simply give each node two lists: one containing the nodes //to which// it has edges, and one containing the nodes //from which// edges point to it.

DFS and BFS run essentially the exact same way with these, simply following each edge in its "forward" direction; a search from //s// then finds all the nodes that //s// can reach. If we instead run the search on the reverse graph //G^rev// (the same graph with every edge flipped), we find all the nodes with paths //to// //s//.

===Strong Connectivity===
These searches become useful in determining the strong connectivity of a given graph. First, we define two nodes //u// and //v// to be //mutually reachable// if there is a path from //u// to //v// and a path from //v// to //u//. Additionally, mutual reachability is transitive: if //u// and //v// are mutually reachable and //v// and //w// are mutually reachable, then //u// and //w// are mutually reachable as well.

If every pair of nodes in a graph is mutually reachable, then we call the graph //strongly connected//.

So, we can define the //strong component// of a given node to be the set of all nodes with which it is mutually reachable. This is similar to the connected components of undirected graphs. Then, we have that:

For any two nodes //s// and //t// in a directed graph, their strong components are either identical or disjoint.

Just as with the connected components, we can determine all of the strong components of a directed graph in, wait for it, //O(m + n)// time.

This was a lovely section that expounded on strong connectivity very smoothly; 10/10.

| + | ===== 3.6: Directed Acyclic Graphs Topological Ordering ===== | ||
A directed graph with no cycles is a //directed acyclic graph// or //DAG//. We define a //topological ordering// of //G// to be an ordering of its nodes as //v_1, v_2, ..., v_n// so that for every edge //(v_i, v_j)// we have //i < j//: every edge points "forward" in the ordering.

From the fact that a //DAG// must have a node with no incoming edges, we then find the stronger statement that:

//G// is a //DAG// iff it has a topological ordering.

==Algorithm to compute topological ordering of G==
''Find a node //v// with no incoming edges and order it first\\
Delete //v// from //G//\\
Recursively compute topological ordering of //G-{v}// and append this order after //v//''

Thanks to the above theorem, we know that there will always be such a node //v// to find. The total running time is //O(n^2)//, since there are //n// iterations and each one can find and delete a node with no incoming edges in //O(n)// time.

We can actually achieve //O(m + n)// time if we are more efficient in finding the nodes that have no incoming edges. We simply maintain, for each node, a count of its incoming edges, decrementing a node's count whenever one of its in-neighbors is deleted; the nodes whose counts reach zero are exactly the ones ready to be ordered next.

This section also did a great job of explaining, and I am a big fan of being given the general algorithm and then a method to make it more efficient. This will prove very beneficial in developing/improving our own algorithms.

