Chapter 4
4.1: Interval Scheduling: The Greedy Algorithm Stays Ahead
- A greedy algorithm builds up a solution in small steps, making a locally optimal choice at each step; for some problems this simple strategy actually produces a globally optimal solution
- It starts with a simple rule and then keeps applying that rule to solve the rest of the problem
- For interval scheduling, the rule that works is to repeatedly accept the request with the earliest finish time among those compatible with the requests already accepted (see the sketch after this list)
- One way to prove that a greedy algorithm is optimal is to show that it always “stays ahead” of any other solution, step by step
- These proofs are on pages 120 to 121
- Other natural-looking greedy rules for this problem (earliest start time, shortest interval, fewest conflicts) can all fail to be optimal, so the choice of greedy rule matters
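Here is a minimal Python sketch of the earliest-finish-time rule; representing requests as (start, finish) pairs is my own choice, not the book's notation:

```python
def schedule_intervals(intervals):
    """Return a maximum-size set of mutually compatible intervals."""
    selected = []
    last_finish = float("-inf")
    # Consider requests in order of finish time; accept each one that
    # starts no earlier than the last accepted request ends.
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:
            selected.append((start, finish))
            last_finish = finish
    return selected

print(schedule_intervals([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10), (8, 11)]))
# -> [(1, 4), (5, 7), (8, 11)]
```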
4.2: Scheduling to Minimize Lateness: An Exchange Argument
- This is a different take on the scheduling problem described above
- Instead of a definite start and end time, each job just has a length and a deadline
- The optimal schedule simply runs the jobs in increasing order of their deadlines (Earliest Deadline First), as sketched below
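A minimal sketch of this Earliest Deadline First rule; the (length, deadline) representation for jobs is my own:

```python
def minimize_lateness(jobs):
    """Run jobs back to back in deadline order; return the maximum lateness."""
    time = 0
    max_lateness = 0
    # Sorting by deadline is the whole algorithm; the rest is bookkeeping.
    for length, deadline in sorted(jobs, key=lambda job: job[1]):
        time += length                          # this job finishes at `time`
        max_lateness = max(max_lateness, time - deadline)
    return max_lateness

print(minimize_lateness([(1, 2), (2, 4), (3, 6)]))  # -> 0 (nothing is late)
```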
4.3: Optimal Caching: A More Complex Exchange Argument
- This problem involves processing a sequence of data requests using a small, fast cache
- A small amount of data can be accessed very quickly, while a large amount of data requires more time to access
- Caching is a term for the process of storing a small amount of data in a fast memory
- Cache maintenance algorithms determine what to keep in the cache and what to evict to make room for newly requested data
- A simple greedy rule for this problem: when a new item needs to be brought into the cache, evict the item whose next request is the farthest in the future (see the sketch after this list)
- This is called the Farthest-in-Future Algorithm
- The proof of optimality for this algorithm can be found on pages 135-136
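A minimal sketch of Farthest-in-Future, assuming (as the book's analysis does) that the entire request sequence is known in advance:

```python
def farthest_in_future(requests, cache_size):
    """Return the number of cache misses on the given request sequence."""
    cache = set()
    misses = 0
    for i, item in enumerate(requests):
        if item in cache:
            continue                            # cache hit, nothing to do
        misses += 1
        if len(cache) == cache_size:
            # Evict the cached item whose next request is farthest away;
            # items never requested again count as farthest of all.
            def next_use(cached):
                for j in range(i + 1, len(requests)):
                    if requests[j] == cached:
                        return j
                return len(requests)
            cache.remove(max(cache, key=next_use))
        cache.add(item)
    return misses

print(farthest_in_future(list("abcbca"), cache_size=2))  # -> 4 misses
```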
4.4: Shortest Paths in a Graph
- Dijkstra's algorithm - a greedy algorithm to solve the shortest path problem
- The details of this algorithm are explained on pages 137 and 138
- Grows a shortest-path tree from a starting node s, from which you can read off the shortest path from s to every other node in the graph (note this is a shortest-path tree, not a minimum spanning tree; a sketch follows)
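A minimal sketch of Dijkstra's algorithm with a binary heap; the adjacency-list representation (a dict of (neighbor, length) lists) is my own choice:

```python
import heapq

def dijkstra(graph, source):
    """Return shortest-path distances from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                            # stale heap entry; skip it
        for v, length in graph[u]:
            if d + length < dist.get(v, float("inf")):
                dist[v] = d + length            # found a shorter path to v
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {"s": [("u", 1), ("v", 4)], "u": [("v", 2)], "v": []}
print(dijkstra(g, "s"))  # -> {'s': 0, 'u': 1, 'v': 3}
```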
Comments
Yet again this reading assignment did a lot to help refresh my memory of the topics we covered in class. The topic of greedy algorithms, in particular Dijkstra's algorithm, as well as the concept of minimum spanning trees are very easy concepts for me to understand and therefore this assignment was the easiest so far to power through. Readability: 9/10
4.5: The Minimum Spanning Tree Problem
- Problem of building a network where everyone is connected but it is built as cheaply as possible (no redundant edges, using the lowest-cost edges available)
- Quoting the book: “The problem is to find a subset of the edges in a graph so that the graph is connected and the total cost is as small as possible.”
- The proof can be found on pages 142-143
- Several possible greedy algorithms can solve this problem optimally
- Kruskal's Algorithm - builds the spanning tree by adding edges in order of increasing cost, as long as the edge being added does not create a cycle. If an edge would create a cycle, we simply discard it and continue (a sketch appears under the 4.6 notes below)
- Prim's Algorithm - starts with a root node s and grows a tree outward from s in a greedy fashion, always attaching the cheapest edge that adds a new node (sketched after this list)
- Reverse-Delete Algorithm - “backward version of Kruskal's Algorithm.” Start with the full graph and delete the highest cost edges that can be deleted without disconnecting the graph.
- Proof and analysis of these algorithms can be found on pages 144 through 149.
- Both Prim's and Kruskal's Algorithms can be implemented to run in O(m log n) time
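A minimal sketch of Prim's Algorithm with a heap; the graph representation (a dict of (neighbor, cost) lists) is my own:

```python
import heapq

def prim(graph, root):
    """Return the edges of a minimum spanning tree grown outward from root."""
    in_tree = {root}
    tree_edges = []
    heap = [(cost, root, v) for v, cost in graph[root]]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(graph):
        cost, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue                            # edge leads back into the tree
        in_tree.add(v)                          # attach v via its cheapest edge
        tree_edges.append((u, v, cost))
        for w, c in graph[v]:
            if w not in in_tree:
                heapq.heappush(heap, (c, v, w))
    return tree_edges

g = {"a": [("b", 1), ("c", 3)], "b": [("a", 1), ("c", 2)], "c": [("a", 3), ("b", 2)]}
print(prim(g, "a"))  # -> [('a', 'b', 1), ('b', 'c', 2)]
```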
4.6: Implementing Kruskal's Algorithm: The Union-Find Data Structure
- Union-Find structure - stores a representation of the components in a way that supports rapid searching and updating.
- This data structure allows us to rapidly determine whether or not two nodes are in the same set.
- Proof and implementation of this data structure can be found on pages 152-156
- The use of this data structure to implement Kruskal's Algorithm is shown on page 157 (and sketched below)
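A minimal sketch of a Union-Find structure (union by size plus path compression) with Kruskal's Algorithm on top of it; the (cost, u, v) edge format is my own:

```python
class UnionFind:
    def __init__(self, elements):
        self.parent = {x: x for x in elements}
        self.size = {x: 1 for x in elements}

    def find(self, x):
        # Walk up to the root of x's set, compressing the path as we go.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False                        # already in the same component
        if self.size[rx] < self.size[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx                    # attach smaller tree under larger
        self.size[rx] += self.size[ry]
        return True

def kruskal(nodes, edges):
    """Return MST edges by adding cheapest edges that don't form a cycle."""
    uf = UnionFind(nodes)
    # union() returns False exactly when an edge would close a cycle.
    return [(c, u, v) for c, u, v in sorted(edges) if uf.union(u, v)]

print(kruskal(["a", "b", "c"], [(1, "a", "b"), (2, "b", "c"), (3, "a", "c")]))
# -> [(1, 'a', 'b'), (2, 'b', 'c')]
```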
4.7: Clustering
- “Clustering arises whenever one has a collection of objects that one is trying to classify or organize into coherent groups”
- Distance function - objects at a larger distance from one another are less similar to each other.
- The clustering problem seeks to group similar objects together (small distances) while keeping dissimilar objects in different groups (large distances)
- k-clustering: a partition of the set of objects into k non-empty groups, chosen so that the spacing (the minimum distance between objects in different groups) is as large as possible; see the sketch after this list
- Analysis and proof of this clustering algorithm can be found on pages 159 and 160
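A minimal sketch of k-clustering as Kruskal's Algorithm stopped early: keep merging the closest pair of clusters until exactly k remain. The toy setup (points as labels, distances in a dict keyed by pairs) is my own:

```python
def k_clustering(points, dist, k):
    """Merge the closest clusters until k remain; return the clusters."""
    cluster_of = {p: {p} for p in points}
    for u, v in sorted(dist, key=dist.get):     # pairs by increasing distance
        cu, cv = cluster_of[u], cluster_of[v]
        if cu is cv:
            continue                            # already in the same cluster
        if len({id(c) for c in cluster_of.values()}) == k:
            break                               # down to k clusters; stop merging
        cu |= cv                                # merge cv's points into cu
        for p in cv:
            cluster_of[p] = cu
    return {frozenset(c) for c in cluster_of.values()}

points = ["a", "b", "c", "d"]
dist = {("a", "b"): 1, ("c", "d"): 2, ("b", "c"): 5,
        ("a", "c"): 6, ("a", "d"): 7, ("b", "d"): 8}
print(k_clustering(points, dist, k=2))
# -> {frozenset({'a', 'b'}), frozenset({'c', 'd'})}
```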
4.8: Huffman Codes and Data Compression
- Data compression - encoding the symbols we use in everyday life into bit strings, using as few bits as possible
- Huffman coding greedily builds a binary tree in which the most common symbols get shorter bit strings and the less common symbols get longer ones: moving to one child of a node appends a 0 bit to the code and moving to the other appends a 1 bit (or vice versa, depending on the implementation). The tree is built by repeatedly merging the two least frequent symbols/subtrees (see the sketch at the end of this section)
- Such a code, if built properly, makes it much cheaper to encode a common letter like an E or an A than a rare letter like a Q or a symbol like a !
- Morse code is a rough example of this idea
- Must map letters to bit strings in such a way that no encoding is a prefix of another - hence the name prefix codes
- The design, analysis, and proof of this particular problem can be found at the end of 4.8 starting on page 166
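A minimal sketch of Huffman's algorithm using a heap; the example frequencies and the tie-breaking counter are my own setup details:

```python
import heapq
from itertools import count

def huffman_codes(freq):
    """Map each symbol to its prefix-free bit string, given symbol frequencies."""
    tiebreak = count()                          # keeps heap comparisons on numbers
    heap = [(f, next(tiebreak), sym) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)       # the two least frequent subtrees...
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (left, right)))  # ...merged

    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):             # internal node: recurse both ways
            walk(tree[0], prefix + "0")         # one child appends a 0 bit
            walk(tree[1], prefix + "1")         # the other appends a 1 bit
        else:
            codes[tree] = prefix                # leaf: record the symbol's code
    walk(heap[0][2], "")
    return codes

print(huffman_codes({"a": 0.45, "b": 0.25, "c": 0.20, "d": 0.10}))
# -> {'a': '0', 'b': '10', 'd': '110', 'c': '111'}
```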
Comments
Readability: 9/10
This chapter was exceptionally readable and the topic of Huffman codes is interesting and still fresh in my mind from CS 112. All in all, another great refresher of what I've learned from class lectures.