courses:cs211:winter2018:journals:beckg:ch3 [2018/02/07 04:26] (current) beckg
An //adjacency matrix// is an //n x n// matrix //A// where //A[u,v]// is 1 if the edge //(u,v)// is in the graph and 0 otherwise. Since //A[u,v]// and //A[v,u]// are stored separately, this representation supports directed graphs as well. Unfortunately, it requires //Theta(n<sup>2</sup>)// space, and it takes //Theta(n)// time to find all the edges incident to a given node. For more efficiency on both counts, particularly when there are far fewer edges than //n<sup>2</sup>//, we can use a better representation.
  
//Adjacency lists// address both of these issues. We keep an array ''Adj'' with an index for each node, and at each ''Adj''[//u//] there is a list containing all the nodes adjacent to //u//. Therefore, in an undirected graph, each edge //(u,v)// appears twice: //v// appears in //u//'s list, and vice versa. This extends naturally to directed graphs, where //v// appears in ''Adj''[//u//] only if the edge points from //u// to //v//.

Adjacency lists require only //O(m + n)// space. This is because ''Adj'' itself has //n// entries, and the total length of all the lists at those entries is //2m//. To see the latter, define the //degree// of a node //v//, //n<sub>v</sub>//, to be the number of edges incident to //v//. The length of the list at ''Adj''[//v//] is then //n<sub>v</sub>//, and the sum of //n<sub>v</sub>// across all nodes is //2m//, since each edge is counted once from each endpoint.

Additionally, if we are currently at a given node, it takes constant time to step to any of its neighbors. This lends itself to efficient searching, so all of our algorithms will use the adjacency list representation.
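To make this concrete, here is a small sketch of my own (the ''build_adjacency_list'' name and the example edges are hypothetical, not from the book):

```python
def build_adjacency_list(n, edges):
    """Return Adj for an undirected graph on nodes 0..n-1,
    where Adj[u] is the list of nodes adjacent to u."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)  # v appears in u's list...
        adj[v].append(u)  # ...and u appears in v's list
    return adj

# Each edge contributes two entries, so the total list length is 2m.
adj = build_adjacency_list(4, [(0, 1), (0, 2), (1, 2), (2, 3)])
```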
=== Implementing Breadth-First Search ===
To implement BFS on the adjacency list structure, we maintain an array ''Discovered'' of length //n// (one spot corresponding to each node), initially all marked ''False'', and change an entry to ''True'' whenever we discover that node. Then we have a list //L[i]// for each layer, adding nodes there as we discover them in that layer. Here, //s// is our source node.

**Disclaimer: still working on the indents for algorithms**
== BFS(s) ==
''Set Discovered[//s//] = True and Discovered[//v//] = False for all other //v//\\
Initialize //L[0]// to consist of just node //s//\\
Set layer counter //i = 0//\\
Set current BFS tree //T// to be an empty set\\
While //L[i]// is not empty:\\
  Initialize an empty list //L[i+1]//\\
  For each node //u// in //L[i]//:\\
    Consider each edge //(u,v)// incident to //u//\\
    If Discovered[//v//] = False then:\\
      Set Discovered[//v//] = True\\
      Add edge //(u,v)// to tree //T//\\
      Add //v// to list //L[i+1]//\\
    Endif\\
  Endfor\\
  i += 1\\
Endwhile''
This algorithm runs in //O(m + n)// time. Note that instead of keeping separate layer lists we could maintain all the levels in a single queue.
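The pseudocode above can be sketched in Python roughly as follows (the ''bfs_layers'' helper and its example graph are my own, assuming the adjacency list is a list of neighbor lists):

```python
def bfs_layers(adj, s):
    """BFS from s using explicit layer lists L[0], L[1], ...;
    returns the layers and the edges of the BFS tree T."""
    discovered = [False] * len(adj)
    discovered[s] = True
    tree = []          # edges (u, v) added to the BFS tree
    layers = [[s]]     # layers[i] corresponds to L[i]
    while layers[-1]:
        nxt = []       # L[i+1]
        for u in layers[-1]:
            for v in adj[u]:
                if not discovered[v]:
                    discovered[v] = True
                    tree.append((u, v))
                    nxt.append(v)
        layers.append(nxt)
    return layers[:-1], tree  # drop the final empty layer

# Small undirected example: edges 0-1, 0-2, 1-2, 2-3 as neighbor lists
layers, tree = bfs_layers([[1, 2], [0, 2], [0, 1, 3], [2]], 0)
```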
=== Implementing Depth-First Search ===
For DFS, we make use of a stack, and instead of the ''Discovered'' array we use an ''Explored'' array, marking a node ''True'' only when we take it off the stack and scan //all// of its edges. The algorithm is as follows:
== DFS(s) ==
''Initialize //S// to be a stack with one element //s//\\
While //S// is not empty:\\
  Take a node //u// from //S//\\
  If Explored[//u//] = False then:\\
    Set Explored[//u//] = True\\
    For each edge //(u,v)// incident to //u//:\\
      Add //v// to the stack //S//\\
    Endfor\\
  Endif\\
Endwhile''
We can further use an array ''Parent'' to build the DFS tree by setting ''Parent''[//v//] = //u// whenever we add node //v// to the stack via edge //(u,v)//.
This algorithm also runs in //O(m + n)// time.
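A sketch of the stack-based DFS with the ''Parent'' array (the ''dfs'' helper and example graph are my own, not from the book):

```python
def dfs(adj, s):
    """Iterative DFS from s; returns the Explored flags and the
    Parent array describing the DFS tree."""
    explored = [False] * len(adj)
    parent = [None] * len(adj)
    stack = [s]
    while stack:
        u = stack.pop()
        if not explored[u]:
            explored[u] = True
            for v in adj[u]:
                if not explored[v]:
                    parent[v] = u  # overwritten if v is pushed again later
                    stack.append(v)
    return explored, parent

# Same undirected example graph: edges 0-1, 0-2, 1-2, 2-3
explored, parent = dfs([[1, 2], [0, 2], [0, 1, 3], [2]], 0)
```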
Additionally, we can find the set of //all// connected components using either algorithm, and this takes //O(m + n)// time as well. This is because the search from one node finds its entire connected component, those nodes never need to be searched again, and so the total work is proportional to the combined size of the components.
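A rough sketch of my own for labeling all components this way (the ''connected_components'' name is hypothetical; either BFS or DFS works for the inner search):

```python
def connected_components(adj):
    """Label every node with a component id; total time O(m + n)."""
    comp = [None] * len(adj)
    c = 0
    for s in range(len(adj)):
        if comp[s] is None:        # s starts a brand-new component
            comp[s] = c
            stack = [s]
            while stack:           # search touches only this component
                u = stack.pop()
                for v in adj[u]:
                    if comp[v] is None:
                        comp[v] = c
                        stack.append(v)
            c += 1
    return comp

# Two components {0, 1} and {2, 3}, plus an isolated node 4
comp = connected_components([[1], [0], [3], [2], []])
```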
This section explained the algorithms very well; I would give it a 9/10.
===== 3.4: Testing Bipartiteness with Breadth-First Search =====
Recall that a bipartite graph is one whose nodes can be partitioned into sets //X// and //Y// such that each edge has one end in //X// and the other in //Y//.

A graph is bipartite if and only if it contains no odd cycles.

We can test this using the BFS tree. We simply keep an array ''Color'' in which each node is assigned red or blue, alternating with each subsequent layer. If at the end there is an edge whose two ends have the same color, then the graph is not bipartite. This runs in //O(m + n)// time as well, a nice recurring theme it seems. The proof of correctness hinges on the fact that if there is ever an edge in //G// between two nodes in the same layer, then there is an odd-length cycle, and therefore the graph is not bipartite.
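The coloring test might look like this in Python (my own ''two_color'' sketch, assuming a connected undirected graph, with 0 and 1 standing in for red and blue):

```python
def two_color(adj, s=0):
    """BFS-color a connected graph layer by layer;
    returns (is_bipartite, Color)."""
    color = [None] * len(adj)
    color[s] = 0               # 0 = red, 1 = blue
    layer = [s]
    while layer:
        nxt = []
        for u in layer:
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]  # alternate per layer
                    nxt.append(v)
        layer = nxt
    # At the end, check that no edge has both ends the same color.
    ok = all(color[u] != color[v] for u in range(len(adj)) for v in adj[u])
    return ok, color

ok_even, _ = two_color([[1, 3], [0, 2], [1, 3], [2, 0]])  # 4-cycle
ok_odd, _ = two_color([[1, 2], [0, 2], [0, 1]])           # triangle
```

The 4-cycle is an even cycle and passes; the triangle is an odd cycle and fails, matching the odd-cycle characterization.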
Obviously brief, this section complemented our class discussion of the process very well, though I do think it could have ordered its claims a little more cleanly; an 8/10.
===== 3.5: Connectivity in Directed Graphs =====
In a directed graph, an edge //(u,v)// indicates an edge //from// //u// //to// //v//. To represent these graphs, we simply give each node two lists: one containing the nodes //to which// it has edges, and one containing the nodes //from which// it has incoming edges.

DFS and BFS run essentially the same way on directed graphs, simply following the edges "out" from the source node. However, the crucial distinction is in //what// these search trees represent: the "connected component" of //s// is now all of the nodes that can be reached //from// it along directed paths. Note that BFS still gives us a shortest path when one exists. Additionally, we can define a new graph //G<sup>rev</sup>// as the same //G// but with all the edge directions reversed. Then, running a BFS or DFS in this reversed graph, if we reach a node //v// //from// //s// in //G<sup>rev</sup>//, that indicates //v// has a path //to// //s// in //G//.
=== Strong Connectivity ===
The construction above becomes useful in determining the strong connectivity of a given graph. First, we define two nodes //u// and //v// to be //mutually reachable// if there is a path from //u// to //v// and a path from //v// to //u//. Additionally, note that this property is transitive.

If every pair of nodes in a graph is mutually reachable, we call the graph //strongly connected//. To determine whether a given graph is strongly connected, we run the search from //s// in both //G// and //G<sup>rev</sup>//; if either search fails to reach every node, then //G// is not strongly connected, and if both reach every node, then it is.
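A sketch of this two-search test (helper names ''reachable'' and ''is_strongly_connected'' are mine; ''adj[u]'' is assumed to list the nodes //u// has edges //to//):

```python
def reachable(adj, s):
    """All nodes reachable from s along directed edges."""
    seen = {s}
    stack = [s]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def is_strongly_connected(adj):
    n = len(adj)
    rev = [[] for _ in range(n)]   # build G^rev: every edge reversed
    for u in range(n):
        for v in adj[u]:
            rev[v].append(u)
    # node 0 reaches everything in G and in G^rev <=> strongly connected
    return len(reachable(adj, 0)) == n and len(reachable(rev, 0)) == n

cyc = is_strongly_connected([[1], [2], [0]])   # directed 3-cycle
path = is_strongly_connected([[1], [2], []])   # directed path 0->1->2
```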
So, we can define the //strong component// of a given node to be the set of all nodes with which it is mutually reachable. This is analogous to the connected components of undirected graphs. Then, we have that:

For any two nodes //s// and //t// in a directed graph, their strong components are either identical or disjoint.

Just as with connected components, we can determine all of the strong components of a directed graph in, wait for it, //O(m + n)// time.

This was a lovely chapter that expounded on strong connectivity very smoothly; 10/10.
===== 3.6: Directed Acyclic Graphs and Topological Ordering =====
A directed graph with no cycles is a //directed acyclic graph//, or //DAG//. We define a //topological ordering// of a directed graph to be an ordering of all of its nodes as //v<sub>1</sub>, v<sub>2</sub>, ..., v<sub>n</sub>// so that for every edge //(v<sub>i</sub>, v<sub>j</sub>)//, we have //i < j//. This immediately gives us that if //G// has a topological ordering, then //G// is a DAG.

From the fact that every DAG has a node with no incoming edges, we then get the stronger statement:

//G// is a DAG if and only if it has a topological ordering.
== Algorithm to compute a topological ordering of G ==
''Find a node //v// with no incoming edges and order it first\\
Delete //v// from //G//\\
Recursively compute a topological ordering of //G-{v}// and append this after //v//''

Thanks to the theorem above, we know that there will always be such a node //v// to find. The total running time is //O(n<sup>2</sup>)//, since each of the //n// iterations may scan all the nodes to find one with no incoming edges. If //G// is very dense (//Theta(n<sup>2</sup>)// edges), then this is linear in the size of the input; if the graph is sparser, though, //O(m + n)// would be a substantial improvement.

We can actually achieve this //O(m + n)// time if we are more efficient about finding the nodes that have no incoming edges. We simply maintain, for each node, a count of its incoming edges, decrementing the count whenever one of its incoming neighbors is deleted; this way we always know which nodes currently have a count of zero.
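A sketch of the //O(m + n)// version (the ''topological_order'' helper is my own; deleting a node is simulated by decrementing the counts of its out-neighbors):

```python
def topological_order(adj):
    """O(m + n) topological ordering via incoming-edge counts;
    adj[u] lists the nodes u has edges *to*."""
    n = len(adj)
    indegree = [0] * n
    for u in range(n):
        for v in adj[u]:
            indegree[v] += 1
    ready = [u for u in range(n) if indegree[u] == 0]  # no incoming edges
    order = []
    while ready:
        u = ready.pop()
        order.append(u)            # order u next, then "delete" it:
        for v in adj[u]:
            indegree[v] -= 1       # u's outgoing edges disappear
            if indegree[v] == 0:
                ready.append(v)    # v is now safe to order
    return order                   # fewer than n nodes => G has a cycle

# DAG with edges 0->1, 0->2, 1->3, 2->3
order = topological_order([[1, 2], [3], [3], []])
```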
This section also did a great job of explaining, and I am a big fan of being given the general algorithm and then a method to make it more efficient. This will prove very beneficial in developing and analyzing algorithms as they grow more complex; 9/10.
courses/cs211/winter2018/journals/beckg/ch3.1517961329.txt.gz · Last modified: by beckg
CC Attribution-Noncommercial-Share Alike 4.0 International