| \\ | \\ | ||
| By using a heap-based priority queue, this algorithm can run in O(//m// log //n//) time. | By using a heap-based priority queue, this algorithm can run in O(//m// log //n//) time. | ||
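As a concrete illustration of that bound, here is a rough Python sketch of heap-based //Dijkstra's Algorithm// (the graph format and function name are mine, not the book's):
<code python>
import heapq

def dijkstra(graph, source):
    """Heap-based Dijkstra sketch: graph maps node -> list of (neighbor, weight)."""
    dist = {source: 0}
    pq = [(0, source)]                   # (distance, node) pairs ordered by distance
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                     # stale entry; a shorter path was found already
        for v, w in graph[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist
</code>
Each edge causes at most one heap push, and each heap operation costs O(log //n//), which is where the O(//m// log //n//) bound comes from.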
| + | ===== Section 4.5 (Minimum Spanning Tree) ===== | ||
| + | This section deals with minimum spanning trees and the various algorithms that can produce one.\\ | ||
| + | \\ | ||
The first algorithm is called //Kruskal's Algorithm//. It sorts the edges in order of increasing cost and inserts them one at a time, skipping any edge whose endpoints are already connected (since that edge would create a cycle), until the tree spans all the nodes.\\
\\
The second algorithm is called //Prim's Algorithm//. It resembles //Dijkstra's Algorithm//: starting from a root node, it grows the tree outward, at each step greedily attaching the node outside the tree that can be reached by the cheapest edge.\\
\\
The final algorithm resembles //Kruskal's Algorithm// run in reverse, and is called the //Reverse-Delete Algorithm//. It starts with the full graph and deletes edges in order of decreasing cost, skipping any deletion that would disconnect the graph.\\
\\
The section then proves correctness and optimality of the algorithms, which can be found on pages 144-149.\\
\\
The section concludes with the implementation and runtime of //Kruskal's// and //Prim's Algorithms//. Using a heap-based priority queue, //Prim's Algorithm// can be implemented in O(//m// log //n//) time; //Kruskal's Algorithm// also runs in O(//m// log //n//) time once the Union-Find structure of the next section is available.
===== Section 4.6 (Union-Find Structure) =====
This section describes the Union-Find structure and its uses. It is well suited to implementing algorithms like //Kruskal's Algorithm// that need to maintain the connected components of a graph as edges are added.\\
\\
The Union-Find structure does exactly what its name implies: union and find operations. It keeps track of disjoint sets (the connected components of a graph, in the case of //Kruskal's Algorithm//); //Find// reports the name of the set containing a given element, and //Union// merges two sets into one.\\
\\
An array-based implementation can run the //Find// operation in O(1) time and //Union// in O(//n//) time in the worst case; by always relabeling the smaller of the two sets, any sequence of //k// //Union// operations takes at most O(//k// log //k//) time. This is efficient enough to use in //Kruskal's Algorithm//, as shown in the sketch below.\\
\\
| + | |||
===== Section 4.7 (Clustering) =====
===== Section 4.8 (Huffman Codes) =====
Huffman codes compress data. Computers operate by producing and reading bits, each of which is a 0 or a 1, and each sequence of bits is assigned some piece of information that the computer can understand. The problem is therefore to pick the most efficient assignment of sequences to information, so that the most frequently used information gets the shortest, least costly sequence. Huffman codes do exactly that.\\
\\
Huffman codes use trees to organize and locate data. All data are stored in the leaf nodes of the tree, with the internal nodes being empty. As the computer reads each bit, it traverses the tree until it lands on a leaf node; it then returns that datum and continues with the next bit from the root. It assigns 0 and 1 to a traversal to either the left or right child node, and since data sit only at the leaves, no encoding is a prefix of another, which guarantees that no two data will have the same encoding.\\
\\
What makes this algorithm work efficiently is that the trees are not always full. The most common data are placed near the top of the tree, so that they are reached first and get the shortest encodings. The book uses commonly used letters as an example: it would not make sense for a rarely used letter to have a shorter encoding than a very common letter like 'e'.\\
\\
If we use a heap-based priority queue to implement //Huffman's Algorithm//, building the tree takes O(//k// log //k//) time, where //k// is the number of distinct symbols: the algorithm repeatedly merges the two lowest-frequency trees until one remains.
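A rough sketch of that construction, reusing the Node class from the decoding sketch above (names are mine):
<code python>
import heapq
from itertools import count

def build_huffman(freqs):
    """freqs: dict mapping symbol -> frequency. Returns the Huffman tree root."""
    tie = count()                        # tie-breaker so the heap never compares Nodes
    pq = [(f, next(tie), Node(symbol=s)) for s, f in freqs.items()]
    heapq.heapify(pq)
    while len(pq) > 1:
        f1, _, a = heapq.heappop(pq)     # the two lowest-frequency trees
        f2, _, b = heapq.heappop(pq)
        heapq.heappush(pq, (f1 + f2, next(tie), Node(left=a, right=b)))
    return pq[0][2]
</code>
Each of the //k//-1 merges does a constant number of heap operations at O(log //k//) apiece, which is where the O(//k// log //k//) bound comes from.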
