===== 4.3: Optimal Caching: A More Complex Exchange Argument =====
  * A small amount of data can be accessed very quickly, while a large amount of data requires more time to access\\
  * Caching is the term for the process of storing a small amount of data in a fast memory\\
  * Cache maintenance algorithms determine what to keep in the cache and what to evict when new data must be brought in\\
  * A simple greedy rule for this problem: when an item must be brought into a full cache, evict the item whose next request is farthest in the future\\
  * This is called the Farthest-in-Future Algorithm\\
  * The proof of optimality for this algorithm can be found on pages 135-136\\

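The eviction rule above can be sketched in Python (a minimal illustration, not the book's pseudocode; function and variable names are my own):

```python
def farthest_in_future(requests, cache_size):
    """Process a request sequence, evicting the cached item whose next
    request is farthest in the future. Returns the number of misses."""
    cache = set()
    misses = 0
    for i, item in enumerate(requests):
        if item in cache:
            continue  # hit: nothing to do
        misses += 1
        if len(cache) >= cache_size:
            # Position of x's next request (infinity if never used again)
            def next_use(x):
                for j in range(i + 1, len(requests)):
                    if requests[j] == x:
                        return j
                return float("inf")
            # Evict the item requested farthest in the future
            cache.remove(max(cache, key=next_use))
        cache.add(item)
    return misses
```

This scan-ahead version is quadratic; it is meant only to make the rule concrete, since the real sequence of future requests is rarely known in practice.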
===== 4.4: Shortest Paths in a Graph =====
  * Dijkstra's algorithm solves the single-source shortest-path problem on a graph with nonnegative edge lengths\\
  * The details of this algorithm are explained on pages 137 and 138\\
  * It builds a shortest-path tree rooted at the starting node, from which you can read off the shortest path from that node to any other node in the graph\\

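A compact heap-based sketch of Dijkstra's algorithm, assuming the graph is given as an adjacency list of (neighbor, weight) pairs (names are my own):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths with nonnegative edge weights.
    graph maps node -> list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

With a binary heap this runs in O(m log n), matching the priority-queue implementation discussed in the book.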
===== Comments =====
Yet again this reading assignment did a lot to help refresh my memory of the topics we covered in class, in particular greedy algorithms and Dijkstra's algorithm.

===== 4.5: The Minimum Spanning Tree Problem =====
  * Both Prim's and Kruskal's algorithms can be implemented to run in O(m log n) time\\

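As one illustration of the O(m log n) bound, here is a minimal heap-based sketch of Prim's algorithm (my own naming, not the book's pseudocode):

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm with a binary heap: O(m log n).
    graph maps node -> list of (neighbor, weight) pairs.
    Returns the total weight of a minimum spanning tree of the
    connected component containing start."""
    visited = {start}
    # Seed the heap with every edge leaving the start node
    heap = [(w, v) for v, w in graph[start]]
    heapq.heapify(heap)
    total = 0
    while heap:
        w, v = heapq.heappop(heap)
        if v in visited:
            continue  # edge leads back into the tree; skip it
        visited.add(v)
        total += w
        for u, wu in graph[v]:
            if u not in visited:
                heapq.heappush(heap, (wu, u))
    return total
```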
===== 4.6: Implementing Kruskal's Algorithm: The Union-Find Data Structure =====
  * This data structure allows us to rapidly determine whether or not two nodes are in the same set.\\

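A minimal sketch of a Union-Find structure (with path compression and union by rank) and Kruskal's algorithm built on top of it; the names and representation are my own:

```python
class UnionFind:
    """Union-Find with path compression and union by rank."""
    def __init__(self, elements):
        self.parent = {x: x for x in elements}
        self.rank = {x: 0 for x in elements}

    def find(self, x):
        # Walk to the root, halving the path as we go (path compression)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False  # already in the same set
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx  # attach the shallower tree under the deeper
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True

def kruskal(nodes, edges):
    """Kruskal's algorithm: scan edges in order of weight, keeping an
    edge exactly when its endpoints lie in different components.
    edges are (weight, u, v) triples; returns the MST edge list."""
    uf = UnionFind(nodes)
    mst = []
    for w, u, v in sorted(edges):
        if uf.union(u, v):
            mst.append((w, u, v))
    return mst
```

Sorting the edges dominates the running time, giving the O(m log n) bound noted in 4.5.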
===== 4.7: Clustering =====
  * The clustering problem seeks to group similar objects together at a short distance from one another while keeping dissimilar objects at a much greater distance from each other\\
  * k-clustering: a partition of a set of objects into k non-empty sets; the goal is to find the k-clustering with the maximum possible spacing\\

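Maximum-spacing k-clustering turns out to be Kruskal's algorithm stopped early: merge the two closest clusters until exactly k remain, and the shortest remaining crossing edge is the spacing. A minimal self-contained sketch (my own naming, using a plain dict as a simple union-find):

```python
def k_clustering(points, edges, k):
    """Single-linkage k-clustering via Kruskal's algorithm.
    edges are (distance, u, v) triples. Returns (clusters, spacing),
    where spacing is the smallest distance between distinct clusters."""
    leader = {p: p for p in points}

    def find(p):
        while leader[p] != p:
            p = leader[p]
        return p

    clusters = len(points)
    edges = sorted(edges)
    for d, u, v in edges:
        if clusters == k:
            break  # stop merging once k clusters remain
        ru, rv = find(u), find(v)
        if ru != rv:
            leader[rv] = ru
            clusters -= 1
    # Spacing: shortest edge that still crosses two different clusters
    spacing = min((d for d, u, v in edges if find(u) != find(v)),
                  default=None)
    groups = {}
    for p in points:
        groups.setdefault(find(p), []).append(p)
    return list(groups.values()), spacing
```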
===== 4.8: Huffman Codes and Data Compression =====
  * Such a code makes it much easier and cheaper, for instance, to encode an E or an A (common letters) than to encode a letter such as Q or a symbol like !\\
  * Letters must be mapped to bit strings in such a way that no encoding is a prefix of another, hence the name prefix codes\\
  * The design, analysis, and proof for this particular problem can be found at the end of 4.8, starting on page 166\\

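A minimal sketch of Huffman's greedy construction, repeatedly merging the two lowest-frequency subtrees (names are my own; not the book's pseudocode):

```python
import heapq

def huffman_codes(freq):
    """Build a Huffman prefix code from a symbol -> frequency mapping.
    Repeatedly merge the two lowest-frequency subtrees, prefixing their
    codes with 0 and 1. Returns a symbol -> bitstring dict."""
    # Each heap entry: (frequency, tie-breaker, partial code table).
    # The tie-breaker keeps the heap from comparing dicts on equal
    # frequencies and makes the result deterministic.
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        (_, _, codes), = heap
        return {s: "0" for s in codes}  # lone symbol still needs one bit
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```

Note how the most frequent symbol ends up with the shortest bit string, matching the intuition about E and A versus Q above.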
===== Comments =====
Readability:\\
This chapter was exceptionally readable, and the topic of Huffman codes is interesting and still fresh in my mind from CS 112. All in all, another great refresher of what I've learned from class lectures.