courses:cs211:winter2018:journals:donohuem:chapter4 (last modified 2018/03/11 21:59 by donohuem)
  
===== 4.7 Clustering =====
Motivation: We wish to organize data into coherent groups. One way to categorize data, in the context of geographical points, is to differentiate clusters by distance. This gives rise to the Clusterings of Maximum Spacing problem. Given a set U of objects, we divide U into k groups. The spacing of a k-clustering is defined as the minimum distance between any pair of points lying in different clusters, and we seek the k-clustering whose spacing is as large as possible (i.e., different clusters should be kept as far apart as possible). The algorithm is similar to the Minimum Spanning Tree algorithms, specifically Kruskal's: add edges in order of increasing distance, skipping edges whose endpoints are already in the same cluster, and stop just before the number of clusters would drop below k. Readability: 5/10.
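The Kruskal-style procedure above can be sketched in code. This is a minimal sketch, not the book's implementation: it assumes 2-D Euclidean points, and the function and variable names (`k_clusters`, `parent`, etc.) are illustrative.

```python
# Sketch of k-clustering of maximum spacing via Kruskal's algorithm:
# sort all pairwise distances, union components in increasing order,
# and stop once exactly k clusters remain. The first skipped
# cross-cluster edge is the spacing of the resulting clustering.
from itertools import combinations
import math

def k_clusters(points, k):
    """Group 2-D points into k clusters; return (spacing, cluster labels)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Union-find root lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # All pairwise edges, sorted by Euclidean distance (Kruskal's order).
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    clusters = n
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            if clusters == k:
                # First edge between two remaining clusters = the spacing.
                return d, [find(x) for x in range(n)]
            parent[ri] = rj
            clusters -= 1
    return None, [find(x) for x in range(n)]
```

Stopping Kruskal's algorithm early like this is exactly why the analysis mirrors the MST proofs: the edges merged so far form part of a minimum spanning tree, and the next (unused) MST edge gives the spacing.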


===== 4.8 Huffman Codes =====
Our problem is to encode symbols (letters of the alphabet) into bits. However, there are two important stipulations we must meet. First, we want our encoding to be as concise as possible, i.e., to minimize the average number of bits per letter (ABL). This means that frequently used letters like e, t, a, o, and i should have shorter encodings than less frequently used letters. Second, there should be no ambiguity in our encoding: an encoded message must decode to exactly one sequence of letters. To meet these two requirements, we use //optimal prefix codes//, in which no letter's codeword is a prefix of any other letter's codeword, and codeword length corresponds inversely to a letter's frequency. An algorithm that creates optimal prefix codes builds a binary tree: take the two lowest-frequency letters, make them leaf nodes in the tree, give their parent their combined frequency, then recursively repeat for the remaining letters. Huffman's Algorithm:

     if S has two letters:
        Encode one letter as 0 and the other letter as 1
     else:
        Let y* and z* be the two lowest-frequency letters
        Form a new alphabet S' by deleting y* and z* and replacing
         them with a new letter w of frequency fy* + fz*
        Recursively construct a prefix code y' for S' with tree T'
        Define a prefix code for S as follows:
            Start with T'
            Take the leaf labeled w and add two children below it
             labeled y* and z*

The running time of this algorithm is O(k log k) when it uses a priority queue, where k is the number of letters in the alphabet. The readability of this section is 6/10.
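The pseudocode above can be sketched with Python's binary-heap module `heapq` as the priority queue, which gives the O(k log k) bound. This is an illustrative sketch, not the book's code; the names (`huffman_codes`, `freq`, `walk`) are my own, and the tiebreak counter is only there so the heap never compares tree nodes directly.

```python
# Sketch of Huffman's algorithm with a binary heap as the priority queue.
# Repeatedly merge the two lowest-frequency nodes (y* and z* in the
# pseudocode) until one tree remains, then read codewords off the tree:
# left edge = 0, right edge = 1.
import heapq
from itertools import count

def huffman_codes(freq):
    """Return a prefix code {letter: bitstring} for a frequency table."""
    tiebreak = count()  # breaks frequency ties; avoids comparing nodes
    heap = [(f, next(tiebreak), letter) for letter, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fy, _, y = heapq.heappop(heap)   # lowest-frequency node
        fz, _, z = heapq.heappop(heap)   # second-lowest-frequency node
        # Merged node w plays the role of the new letter of freq fy* + fz*.
        heapq.heappush(heap, (fy + fz, next(tiebreak), (y, z)))
    _, _, root = heap[0]

    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):      # internal node: recurse on children
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                            # leaf: record the letter's codeword
            codes[node] = prefix or "0"
        return codes
    return walk(root, "")
```

Each of the k - 1 merges costs O(log k) heap work, which is where the O(k log k) running time comes from. Because letters sit only at leaves, no codeword can be a prefix of another, so the code is prefix-free by construction.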
  
  
  