===== Section 4.2 =====
This section proves that the greedy algorithm is optimal by proving four essential components of the problem. First, we prove that there is an optimal schedule with no idle time. Next, we prove that all schedules with no inversions (a job with an earlier deadline being scheduled AFTER a job with a later one) and no idle time have the same maximum lateness. We prove this via a direct proof showing that the ordering of jobs with the same deadline does not affect the maximum lateness. Then, we prove by direct proof that there is an OPTIMAL schedule with no inversions and no idle time. To conclude the section, we prove that the schedule produced by our greedy algorithm has the optimal maximum lateness. This proof builds on the proofs we completed above: because we proved that there is an optimal schedule with no inversions, and that all schedules with no inversions have the same maximum lateness, the schedule our algorithm returns MUST be optimal.
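The greedy rule these proofs are about (schedule jobs in order of deadline, then measure lateness) can be sketched as follows. The job representation, (length, deadline) tuples, and the function name are my own choices for illustration, not from the text.

```python
# Sketch of the Earliest-Deadline-First greedy schedule described above.
# Jobs are (length, deadline) tuples; this is an illustrative format.

def min_max_lateness(jobs):
    """Order jobs by deadline and return (order, maximum lateness)."""
    order = sorted(jobs, key=lambda job: job[1])  # earliest deadline first
    time, max_late = 0, 0
    for length, deadline in order:
        time += length                            # job finishes at this time
        max_late = max(max_late, time - deadline) # lateness of this job
    return order, max_late
```

Any schedule with an inversion can be transformed into this one without increasing the maximum lateness, which is the heart of the exchange argument above.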
===== Section 4.4 (Shortest Path) =====
This section uses a greedy algorithm to solve the Shortest-Path problem. We are given a starting node //s// and a graph of nodes and edges, where each edge is labeled with a value designating the cost of traversing it. Our goal is to find the least costly path from //s// to another given node //t//.\\
\\
Edsger Dijkstra proposed a greedy solution to the Shortest-Path problem in 1959. This algorithm actually finds the shortest path from //s// to every other node in the graph, not just //t//. Once those are known, it is easy to read off the overall least costly route from //s// to //t//.\\
\\
The specific algorithm is as follows: first, we record the costs of all edges leaving //s//. Then, we construct a priority queue of the nodes, ordered from least costly known path to most costly. We repeatedly remove the highest-priority node from the queue and explore all edges leaving it, all the while keeping track of the cost to reach each individual node. Whenever we find a less costly path to a node, we update that node's recorded cost. After all shortest paths have been found, we can easily return the shortest path from //s// to any end node //t//.\\
\\
By using a heap-based priority queue, this algorithm can run in O(//m// log //n//) time.
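The steps above can be sketched in Python, using the standard library's heapq as the heap-based priority queue. The graph format, an adjacency dictionary mapping each node to a list of (neighbor, cost) pairs, is my own assumption for illustration.

```python
import heapq

# Minimal sketch of Dijkstra's algorithm as described above.
# graph: dict mapping node -> list of (neighbor, cost) pairs (assumed format).

def dijkstra(graph, s):
    """Return a dict of the least costly path cost from s to each reachable node."""
    dist = {s: 0}
    heap = [(0, s)]                          # priority queue of (cost so far, node)
    while heap:
        d, u = heapq.heappop(heap)           # cheapest known node first
        if d > dist.get(u, float("inf")):
            continue                         # stale entry; a cheaper path was found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                 # found a less costly path; update
                heapq.heappush(heap, (nd, v))
    return dist
```

Each edge is pushed onto the heap at most once per improvement, and each heap operation costs O(log //n//), which is where the O(//m// log //n//) bound comes from.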
===== Section 4.5 (Minimum Spanning Tree) =====
This section deals with minimum spanning trees and the various algorithms that can produce one.\\
\\
The first algorithm is called //Kruskal's Algorithm//. This algorithm starts with all of the nodes and no edges. Then, the algorithm adds the least costly edges first, unless adding an edge would create a cycle. In that case, the edge is not added and the next least costly edge is considered. The graph returned is a minimum spanning tree.\\
\\
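A sketch of Kruskal's Algorithm in Python follows. The cycle check uses a small inlined union-find helper (Section 4.6 covers that structure properly); the (cost, u, v) edge format is my own choice for illustration.

```python
# Sketch of Kruskal's Algorithm as described above: take cheapest edges
# first, skipping any edge that would create a cycle.

def kruskal(nodes, edges):
    """Return the edge list of a minimum spanning tree. Edges are (cost, u, v)."""
    parent = {v: v for v in nodes}       # each node starts in its own set

    def find(v):                         # walk up to the set's representative
        while parent[v] != v:
            v = parent[v]
        return v

    tree = []
    for cost, u, v in sorted(edges):     # consider least costly edges first
        ru, rv = find(u), find(v)
        if ru != rv:                     # endpoints in different sets: no cycle
            parent[ru] = rv              # merge the two sets
            tree.append((cost, u, v))
    return tree
```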
The second algorithm is called //Prim's Algorithm//. This algorithm is similar to Dijkstra's algorithm, but slightly less complex. We begin at a root node, then explore the least costly edge coming off of that node and add it to a set of edges. We repeat this process, each time exploring the least costly edge connecting any explored node to an unexplored node. We repeat until there are no more unexplored nodes, and the returned set of edges will be a minimum spanning tree.\\
\\
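Prim's Algorithm can be sketched with heapq as well; the tree grows one node at a time from the root. The adjacency-dictionary graph format (node to list of (neighbor, cost) pairs) is again my own assumption.

```python
import heapq

# Sketch of Prim's Algorithm as described above: repeatedly attach the
# cheapest edge from an explored node to an unexplored one.

def prim(graph, root):
    """Return the set of (cost, u, v) edges of a minimum spanning tree."""
    explored = {root}
    heap = [(w, root, v) for v, w in graph[root]]  # edges leaving the root
    heapq.heapify(heap)
    tree = set()
    while heap:
        w, u, v = heapq.heappop(heap)    # cheapest edge leaving the tree
        if v in explored:
            continue                     # edge leads back into the tree; skip
        explored.add(v)
        tree.add((w, u, v))
        for x, wx in graph[v]:           # new frontier edges from v
            if x not in explored:
                heapq.heappush(heap, (wx, v, x))
    return tree
```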
The final algorithm resembles //Kruskal's Algorithm// done in reverse. For this algorithm, we are given the complete graph and then delete the most expensive edge. We repeat this process, making sure each time that we do not disconnect any node from the rest of the graph. The remaining edges form a minimum spanning tree.\\
\\
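The reverse-deletion idea can be sketched as follows; the connectivity check is a plain depth-first search, and the (cost, u, v) edge format is my own choice for illustration.

```python
# Sketch of the reverse of Kruskal's Algorithm described above: delete the
# most expensive edge whenever doing so leaves the graph connected.

def connected(nodes, edges):
    """Depth-first check that every node is reachable from the first one."""
    adj = {v: [] for v in nodes}
    for _, u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u])
    return len(seen) == len(nodes)

def reverse_delete(nodes, edges):
    """Return a minimum spanning tree by deleting expensive edges."""
    kept = sorted(edges, reverse=True)          # most expensive first
    for e in list(kept):
        trial = [x for x in kept if x != e]
        if connected(nodes, trial):             # safe: graph stays connected
            kept = trial                        # so delete the edge for good
    return kept
```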
The section then proves the correctness and optimality of these algorithms, which can be found on pages 144-149.\\
\\
The section concludes with the implementation and running time of //Kruskal's// and //Prim's Algorithms//. Using a heap-based priority queue to hold the nodes, we can get a running time of O(//m// log //n//). The ExtractMin and ChangeKey operations work in O(log //n//) time, and each is performed at most //m// times, yielding O(//m// log //n//).
===== Section 4.6 (Union-Find Structure) =====
This section describes the Union-Find structure and its uses. It is well suited to implementing algorithms similar to //Kruskal's Algorithm//.\\
\\
The Union-Find structure does exactly what its name implies: union and find operations. It does this by keeping track of disjoint sets (or components of a graph, in the case of //Kruskal's//). In addition to these operations, Union-Find also has a MakeUnionFind operation, which takes a set of nodes as input and returns a Union-Find data structure in which each node is contained within its own set.\\
\\
An array-based implementation can be used to run the Find operation in O(1) time and Union in O(//n//) time. This is efficient enough to use in //Kruskal's Algorithm//, but it is not the optimal implementation. The pointer-based implementation runs Find in O(log //n//) time, which is worse than the array-based implementation, but runs Union in O(1). We gain such an improved Union runtime because we only have to create a link from one set to the other, as opposed to merging the two sets linearly.\\
\\
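The array-based variant can be sketched like this: Find is a single lookup, and Union relabels the smaller of the two sets (which is what makes it cost up to O(//n//) in the worst case). The class and method names are my own choices, not from the text.

```python
# Sketch of the array-based Union-Find described above. Find is O(1);
# Union relabels the smaller set, costing up to O(n) in the worst case.

class UnionFind:
    def __init__(self, nodes):
        """MakeUnionFind: each node starts in its own singleton set."""
        self.label = {v: v for v in nodes}     # node -> name of its set
        self.members = {v: [v] for v in nodes} # set name -> its members

    def find(self, v):
        return self.label[v]                   # O(1): just look up the label

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return                             # already in the same set
        if len(self.members[ra]) < len(self.members[rb]):
            ra, rb = rb, ra                    # keep the larger set's label
        for v in self.members[rb]:             # relabel the smaller set: O(n)
            self.label[v] = ra
        self.members[ra].extend(self.members.pop(rb))
```

Relabeling the smaller set each time is what bounds a sequence of //k// unions by O(//k// log //k//) overall, since any one node's label changes at most log //k// times.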
  
===== Section 4.7 (Clustering) =====
===== Section 4.8 (Huffman Codes) =====
Huffman codes compress data. Computers operate by producing and reading bits, sequences of 0's and 1's. Each sequence is assigned some piece of information that the computer can understand, so the problem centers on picking the most efficient assignment of sequences to information, so that the most frequently used information gets the least costly sequence. Huffman codes do exactly that.\\
\\
Huffman codes use trees to organize and locate data. All data are stored in the leaf nodes of the tree, with the parent nodes being empty. As the computer reads each bit, it traverses the tree until it lands on a leaf node. Then, it returns that datum and reads the next sequence of bits. Reading a 0 or a 1 corresponds to traversing to the left or right child node, and because data sit only at leaves, no datum's encoding is a prefix of another's, so no two data can be confused.\\
\\
What makes this algorithm efficient is that the trees are not always full. The most common data are placed near the top of the tree, so that they are reached first. The book uses commonly used letters as an example: it would not make sense for the letter 'e' to take up as many bits as a 'z' or 'q', since 'e' is far more commonly used. The trees can therefore be somewhat skinny, but any given input will most likely not traverse the entire depth of the tree.\\
\\
If we use a heap-based priority queue to implement Huffman's algorithm, we can achieve O(//n// log //n//) time, with //n// representing the number of letters in the given alphabet. Heaps are trees that allow us to make insertions and deletions in O(log //n//) time. These operations are ideal for Huffman's algorithm, and so priority queues are the ideal data structure to use.
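Huffman's algorithm with a heapq priority queue can be sketched as follows: repeatedly merge the two least frequent trees under a new parent until one tree remains, then read codes off the root-to-leaf paths. The tuple-based tree encoding and function name are my own choices for illustration.

```python
import heapq

# Sketch of Huffman's algorithm as described above. Internal nodes are
# (left, right) tuples; leaves are the symbols themselves.

def huffman_codes(freqs):
    """Map each symbol to a bit string; rarer symbols get longer codes."""
    # The counter breaks frequency ties so the heap never compares trees.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least frequent trees...
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))  # ...merged
        count += 1
    codes = {}

    def walk(node, prefix):
        if isinstance(node, tuple):         # internal node: 0 left, 1 right
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"     # leaf: record this symbol's code
    walk(heap[0][2], "")
    return codes
```

With frequencies like {'e': 5, 'z': 1, 'a': 3}, the rare 'z' ends up deeper in the tree than the common 'e', which is exactly the skinny-tree behavior described above.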