courses:cs211:winter2018:journals:thetfordt:chapter4 — revision 2018/03/12 00:17 by thetfordt
This leads us to the bottom-up approach, also known as Huffman codes. Huffman developed his algorithm, and proved its optimality, by constructing the prefix tree from the bottom up. Suppose we are given an alphabet S with given frequencies. If S has two letters, then we encode one letter using 0 and the other using 1 (base case). Otherwise, we let y* and z* be the two lowest-frequency letters. We form a new alphabet S' by deleting y* and z* and replacing them with a new letter w of frequency f(y*) + f(z*). We then recursively construct a prefix code m' for S', with tree T', and define a prefix code for S as follows: start with T', take the leaf labeled w, and add two children below it labeled y* and z*. We can see that this algorithm addresses exactly the kind of recursive problem the chapter began by describing. The book follows the logic of the algorithm by proving its optimality: it shows that the Huffman code for a given alphabet achieves the minimum average number of bits per letter of any prefix code.
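The recursive merge-and-relabel procedure above can be sketched directly. This is only an illustration of the idea, not the book's code: it rescans the whole list to find the two lowest-frequency letters on every step, which is the straightforward (quadratic-looking) version.

```python
def huffman_naive(freq):
    """Build a Huffman code for a {letter: frequency} dict by repeatedly
    merging the two lowest-frequency entries (naive rescan each round)."""
    # Each node is (frequency, tree); a tree is either a letter
    # or a pair (left_subtree, right_subtree).
    nodes = [(f, letter) for letter, f in freq.items()]
    while len(nodes) > 1:
        # Naive step: sort by frequency to find y* and z*.
        nodes.sort(key=lambda n: n[0])
        (f1, t1), (f2, t2) = nodes[0], nodes[1]
        # Replace y* and z* with the merged letter w of frequency f1 + f2.
        nodes = nodes[2:] + [(f1 + f2, (t1, t2))]
    # Read codes off the tree: left edge = 0, right edge = 1.
    codes = {}
    def assign(tree, prefix):
        if isinstance(tree, tuple):
            assign(tree[0], prefix + "0")
            assign(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"  # single-letter alphabet edge case
    assign(nodes[0][1], "")
    return codes
```

For example, with frequencies a:5, b:2, c:1, d:1, the algorithm first merges c and d, then merges that node with b, so a gets a one-bit code and c, d get three-bit codes.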
  
The book then walks through the implementation, first noting that it appears this would run in O(k^2) time. However, if we implement the priority queue with a heap, we can reduce the running time to O(k log k).
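A heap-based sketch of the same construction (my own illustration, assuming Python's standard `heapq` module): each of the k - 1 merges does a constant number of heap operations, each O(log k), giving O(k log k) overall instead of rescanning the whole alphabet every round.

```python
import heapq

def huffman_heap(freq):
    """Huffman coding with a binary heap: k - 1 merges, each O(log k)."""
    # Heap entries are (frequency, tie-breaker, tree). The counter keeps
    # comparisons away from the tree payloads, which are not comparable.
    heap = [(f, i, letter) for i, (letter, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)  # lowest-frequency letter y*
        f2, _, t2 = heapq.heappop(heap)  # second-lowest letter z*
        # Push the new "letter" w with frequency f(y*) + f(z*).
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    # Read codes off the finished tree: left edge = 0, right edge = 1.
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"  # single-letter alphabet edge case
    walk(heap[0][2], "")
    return codes
```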
  
 In the section mentioning some extensions of Huffman codes, the idea of arithmetic encoding really jumped out to me. I would have thought that a series of black and white pixels would have to be represented by 0's and 1's for black and white. However, I had not considered that you could represent certain common patterns of whites and blacks using prefix codes.