  
===== Comments =====
Yet again this reading assignment did a lot to help refresh my memory of the topics we covered in class. The topics of greedy algorithms, in particular Dijkstra's algorithm, and minimum spanning trees are very easy concepts for me to understand, and therefore this assignment was the easiest so far to power through. Readability: 9/10

===== 4.5: The Minimum Spanning Tree Problem =====
  *Problem of building a network in which everyone is connected but which is built as cheaply as possible (no redundant edges, smallest possible edge costs)\\
  *Quoting the book: "The problem is to find a subset of the edges in a graph so that the graph is connected and the total cost is as small as possible."\\
    *Proof of this can be found on pages 142-143\\
  *Several possible greedy algorithms solve this problem optimally\\
  *Kruskal's Algorithm - builds the spanning tree by adding edges in order of increasing cost, as long as the edge being added does not create a cycle. If an edge would create a cycle we simply discard it and continue with the algorithm\\
  *Prim's Algorithm - start with a root node s and grow a tree outward from s in a greedy fashion, always adding the cheapest edge that attaches a new node to the tree\\
  *Reverse-Delete Algorithm - the "backward version of Kruskal's Algorithm." Start with the full graph and delete edges in order of decreasing cost, as long as deleting an edge does not disconnect the graph\\
  *Proofs and analysis of these algorithms can be found on pages 144 through 149\\
  *Both Prim's and Kruskal's algorithms run in O(m log n) time\\
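A minimal sketch of Prim's Algorithm in Python (my own illustration, not the book's pseudocode; the adjacency-list representation and function name are my assumptions). It uses a binary heap to get the O(m log n) bound mentioned above:

```python
import heapq

def prim_mst(n, adj):
    """Prim's algorithm: grow a tree from node 0, always attaching the
    cheapest edge that connects the tree to a node outside it.
    adj[u] is a list of (cost, v) pairs; nodes are 0..n-1.
    Assumes the graph is connected; runs in O(m log n) with a heap."""
    visited = [False] * n
    mst_cost = 0
    nodes_in_tree = 0
    heap = [(0, 0)]  # (edge cost, node); the start node joins for free
    while heap and nodes_in_tree < n:
        cost, u = heapq.heappop(heap)
        if visited[u]:
            continue  # stale heap entry: u was already attached more cheaply
        visited[u] = True
        mst_cost += cost
        nodes_in_tree += 1
        for c, v in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (c, v))
    return mst_cost
```

For example, on a triangle with edge costs 1, 2, and 3, the algorithm keeps the two cheapest edges for a total cost of 3.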

===== 4.6: Implementing Kruskal's Algorithm: The Union-Find Data Structure =====
  *Union-Find structure - stores a representation of the connected components in a way that supports rapid searching and updating\\
  *This data structure allows us to rapidly determine whether or not two nodes are in the same set\\
  *Proofs and implementations of this data structure can be found on pages 152-156\\
  *Using this data structure to implement Kruskal's Algorithm is covered on page 157\\
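Here is a sketch of Union-Find driving Kruskal's Algorithm in Python (my own illustration under my own naming; the book's implementation on pages 152-157 differs in details). Union by rank plus path compression makes Find and Union nearly constant time, so sorting the edges dominates the running time:

```python
class UnionFind:
    """Union-Find with union by rank and path compression."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False  # same component: this edge would create a cycle
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True

def kruskal_mst(n, edges):
    """edges is a list of (cost, u, v); returns the total MST cost.
    Edges are taken in increasing cost order; union() rejects cycles."""
    uf = UnionFind(n)
    return sum(c for c, u, v in sorted(edges) if uf.union(u, v))
```

On the same triangle example (costs 1, 2, 3), Kruskal's accepts the two cheap edges and discards the cost-3 edge because it would close a cycle.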

===== 4.7: Clustering =====
  *"Clustering arises whenever one has a collection of objects that one is trying to classify or organize into coherent groups"\\
  *Distance function - objects at a larger distance from one another are less similar to each other\\
  *The clustering problem seeks to group similar objects together (small distances within a group) while keeping dissimilar objects far apart (large distances between groups)\\
  *k-clustering: a partition of a set of objects into k non-empty sets; we seek the k-clustering whose spacing (the minimum distance between objects in different sets) is as large as possible\\
  *Analysis and proof of this clustering algorithm can be found on pages 159 and 160\\
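The connection to Kruskal's Algorithm can be sketched as follows (my own illustration; the function name and edge-list input are my assumptions): run Kruskal's but stop when exactly k components remain, and the cost of the next edge joining two different clusters is the spacing.

```python
def k_clustering_spacing(n, edges, k):
    """Maximum-spacing k-clustering via Kruskal's Algorithm: repeatedly
    merge the closest pair of clusters until exactly k remain; the
    spacing is the cost of the next edge between two different clusters.
    edges is a list of (distance, u, v) over nodes 0..n-1."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = n
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # u and v are already in the same cluster
        if components == k:
            return cost  # first inter-cluster distance = spacing
        parent[ru] = rv
        components -= 1
    return None  # fewer than k+1 components were ever separable
```

For instance, with clusters {0, 1, 2} and {3, 4} joined internally by distance-1 edges and separated by a distance-8 edge, the 2-clustering spacing is 8.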

===== 4.8: Huffman Codes and Data Compression =====
  *Data compression - encoding the symbols that we interpret in our everyday life into bits that a computer can understand and manipulate, as compactly as possible\\
  *Huffman coding applies the greedy style of this chapter to build a tree in which the most common symbols get shorter bit strings and the less common symbols get longer ones. Going left from a node adds a 1 bit to the code and going right adds a 0 bit, or vice versa depending on the implementation\\
  *Such a code, if constructed properly, makes it much cheaper to encode an E or an A (common letters) than to encode a letter such as Q or a symbol like !\\
  *Morse code is a rough real-world example of this idea\\
  *Must map letters to bit strings in such a way that no encoding is a prefix of another; hence the name prefix codes\\
  *The design, analysis, and proof of this particular problem can be found at the end of Section 4.8, starting on page 166\\
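The greedy construction can be sketched in Python with a min-heap (my own illustration, not the book's pseudocode; I track each subtree's partial codewords in a dict rather than building explicit tree nodes): repeatedly merge the two least-frequent subtrees, so frequent symbols end up near the root with short codes.

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman prefix code. freqs maps symbol -> frequency.
    Each heap entry is (total frequency, tie-break id, partial codes);
    merging prepends a 0 bit to one subtree's codes and a 1 to the other's."""
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        (_, _, codes), = heap
        return {sym: "0" for sym in codes}  # a lone symbol gets a 1-bit code
    count = len(heap)  # fresh tie-break ids so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]
```

With frequencies a:45, b:13, c:12, the common symbol a gets a 1-bit code while b and c get 2-bit codes, and no codeword is a prefix of another.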

===== Comments =====
Readability: 9/10\\
This chapter was exceptionally readable, and the topic of Huffman codes is interesting and still fresh in my mind from CS 112. All in all, another great refresher of what I've learned from class lectures.
  
courses/cs211/winter2011/journals/andrew/chapter4.txt · Last modified: 2011/03/01 22:13 by bennetta