  
I give this section a 7 for readability and a 7 on the interesting scale.
===== Section 4.5 =====
Section 4.5 covers three ways to build a minimum spanning tree of a connected, weighted graph: //Prim's Algorithm//, //Kruskal's Algorithm//, and the //Reverse-Delete Algorithm//. All three are greedy by nature, and all three were covered in class beforehand. The chapter works through proofs used to assert the correctness of the three algorithms, the most important being the cut and cycle properties. The cut property shows that the cheapest edge crossing any cut must belong to the minimum spanning tree, while the cycle property shows that the most expensive edge on any cycle cannot belong to it, which justifies the deletions made by the //Reverse-Delete Algorithm//.
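As a concrete illustration of the greedy approach, here is a minimal Python sketch of Prim's algorithm using a heap of edges crossing the cut. The adjacency-list representation and all names are my own, not the textbook's pseudocode:

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm: repeatedly add the cheapest edge that crosses
    the cut between the tree built so far and the rest of the graph.

    graph: dict mapping node -> list of (weight, neighbor) pairs.
    Returns (mst_edges, total_weight).
    """
    visited = {start}
    mst_edges = []
    total = 0
    # Heap of candidate crossing edges, ordered by weight: (w, u, v)
    heap = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(heap)
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue  # stale entry: this edge no longer crosses the cut
        visited.add(v)
        mst_edges.append((u, v, w))
        total += w
        for w2, x in graph[v]:
            if x not in visited:
                heapq.heappush(heap, (w2, v, x))
    return mst_edges, total
```

By the cut property, every edge this sketch adds is safe, since it is the cheapest edge leaving the current tree.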

Although this chapter was informative, the fact that we had covered it in class for multiple days made the reading a bit redundant, since the material had already been presented. I give this section a 4 on the interesting scale and a 6 on the readability scale.
===== Section 4.6 =====
Section 4.6 delves into the implementation of Kruskal's algorithm. The new data structure introduced in the section is the Union-Find structure. Implemented using pointers, Union-Find supports three operations: MakeUnionFind(//S//), which builds the structure from a set of nodes //S//; Find(//u//), which returns the name of the set containing node //u// and can be implemented in O(log //n//) for our purposes; and Union(//A//, //B//), which merges the two sets //A// and //B// into one and can be implemented in O(1). Using this structure, the overall runtime of Kruskal's algorithm is O(//m// log //n//).
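A minimal Python sketch of such a Union-Find structure, using union by size (names are my own, not the book's exact pseudocode):

```python
class UnionFind:
    """Pointer-based Union-Find with union by size.

    find follows parent pointers to the set name in O(log n);
    union is O(1) once the two set names (roots) are known.
    """

    def __init__(self, elements):  # MakeUnionFind(S)
        self.parent = {x: x for x in elements}
        self.size = {x: 1 for x in elements}

    def find(self, u):  # Find(u): follow pointers up to the set name
        while self.parent[u] != u:
            u = self.parent[u]
        return u

    def union(self, a, b):  # Union(A, B): a and b must be set names
        if self.size[a] < self.size[b]:
            a, b = b, a  # always point the smaller set at the larger
        self.parent[b] = a
        self.size[a] += self.size[b]
```

Pointing the smaller set at the larger is what bounds the pointer chains at O(log //n//): an element's set can only grow by at least doubling each time its name changes.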

The in-depth implementation of Kruskal's algorithm was interesting and helped cement the in-class lecture. I give this section a 7 on readability and a 7 on the interesting scale.
===== Section 4.7 =====
Section 4.7 applies //Kruskal's Algorithm// to the problem of clustering. Clustering (specifically, single-link clustering) partitions the nodes into //k// clusters so as to maximize the spacing, that is, the minimum distance between any two points in different clusters, with the number of clusters //k// given at the beginning of the algorithm. I am unsure why the number of clusters is fixed in advance; it seemed to me it would make more sense to let the algorithm choose the number of clusters that maximizes spacing.
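The connection to Kruskal's algorithm can be sketched as follows: add edges in increasing order of distance, skipping edges within a cluster, and stop once //k// clusters remain. The first merge we refuse is the spacing. This is my own minimal Python version, not the book's pseudocode, and it uses a plain dict-based union-find for brevity:

```python
def single_link_clusters(points, edges, k):
    """Single-link k-clustering via Kruskal's algorithm.

    points: list of node names; edges: list of (distance, u, v) tuples.
    Returns the spacing of the resulting k-clustering (the shortest
    distance between two points left in different clusters).
    """
    parent = {p: p for p in points}

    def find(u):  # follow pointers to the cluster's representative
        while parent[u] != u:
            u = parent[u]
        return u

    clusters = len(points)
    for d, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            if clusters == k:
                return d  # first rejected merge = spacing
            parent[ru] = rv  # merge the two clusters
            clusters -= 1
    return None  # fewer than k components were ever separable
```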

I give this section a 6 on the readability scale and a 7 on the interesting scale.
===== Section 4.8 =====
Section 4.8 covered the theory and implementation of data compression using Huffman codes. The idea behind this is that when data is sent across networks, it should be in a compressed form so that the transmission is efficient. Huffman codes achieve this by encoding each letter with a code whose length depends on its frequency, so frequent letters get short codes. The algorithm repeatedly finds the two lowest-frequency letters and merges them into a single new letter, continuing until one letter remains. The algorithm operates in O(//k// log //k//) time, with //k// being the number of letters in the alphabet.
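The merging step maps naturally onto a heap keyed by frequency, which is where the O(//k// log //k//) bound comes from. A minimal Python sketch (my own names, not the book's pseudocode):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build Huffman codes by repeatedly merging the two
    lowest-frequency letters into one meta-letter.

    Returns a dict mapping each letter to its bit string.
    """
    freq = Counter(text)
    # Heap entries: (frequency, tiebreak, tree), where a tree is either
    # a letter or a (left, right) pair of subtrees.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:  # degenerate one-letter alphabet
        ((_, _, ch),) = heap
        return {ch: "0"}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)  # two least frequent trees
        f2, _, t2 = heapq.heappop(heap)
        count += 1
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
    codes = {}

    def walk(tree, prefix):  # read codes off the tree: left=0, right=1
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix

    walk(heap[0][2], "")
    return codes
```

Each of the //k// - 1 merges costs O(log //k//) heap work, giving the stated bound.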

I give this section a 7 on the interesting scale and a 6 for readability.