Right off the bat, the authors state that precisely defining a greedy algorithm is exceedingly difficult. A greedy algorithm is characterized by the way it constructs a solution: it builds the solution up in small steps, at each step choosing the option that looks best according to some local criterion applied to the input. Interestingly, when a greedy algorithm succeeds, it says something about the original problem itself: there is a simple local decision rule that can be followed all the way to a globally optimal solution.
There are two main strategies for proving the sometimes surprising conclusion that a greedy algorithm produces an optimal solution. The first is referred to as greedy stays ahead. This strategy measures the greedy algorithm's progress step by step and shows that at every step it is doing at least as well as any other solution. The other strategy is the exchange argument. This begins with an arbitrary optimal solution and gradually transforms it into the greedy solution without ever making it worse.
This section was pretty easy to understand since it did not actually delve too deep into technical concepts or rely heavily on notation.
This section examines the Interval Scheduling Problem, which asks for the largest subset of intervals that do not overlap one another in time. We must settle on a rule around which to design the greedy algorithm, and a few candidates are suggested.
One possible rule is selecting intervals by earliest start time, but this does not work: the interval that begins first may also end last, blocking every other interval. As long as there are two or more other intervals that do not overlap each other, this rule fails to yield an optimal solution.
Another rule that could be used is selecting the intervals in order of size, smallest to largest. This can also fail: a single small interval may straddle the gap between two non-overlapping intervals, so the rule selects it alone and yields a subset of size 1 when a subset of size 2 exists.
A third rule is choosing intervals in order of fewest conflicts. This one will not work either: a counterexample can be constructed in which the interval with the fewest conflicts is not part of any optimal solution, so selecting it first rules out the optimum.
The rule that does work is selecting intervals based solely on which ends first. Its optimality can be proven with a greedy-stays-ahead argument showing that the greedy solution contains as many intervals as an optimal one. Compare the greedy selections to an optimal set position by position: for every index r, the r-th interval chosen by the greedy algorithm finishes no later than the r-th interval in the optimal set, because the greedy always picks the compatible interval with the earliest finish time. Since the greedy stays ahead in this sense, whenever the optimal set still has another compatible interval available, so does the greedy, so the greedy never stops with fewer intervals than the optimal set. The greedy solution therefore has the same size as an optimal solution and is itself optimal.
It runs in O(n log n) time. The algorithm must sort the inputs by finish time, an O(n log n) operation; it then makes a single linear pass, selecting the first interval and afterwards every interval whose start time is no earlier than the finish time of the most recently selected interval.
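To make the earliest-finish-time rule concrete, here is a small Python sketch of it; the representation of intervals as (start, finish) tuples, and the convention that an interval may start exactly when the previous one ends, are my own choices rather than the book's.

```python
def interval_schedule(intervals):
    """Return a largest set of mutually compatible (non-overlapping) intervals."""
    # Sort by finish time: the O(n log n) step.
    intervals = sorted(intervals, key=lambda iv: iv[1])
    selected = []
    last_finish = float("-inf")
    for start, finish in intervals:
        # Accept an interval only if it starts no earlier than the last selected finish.
        if start >= last_finish:
            selected.append((start, finish))
            last_finish = finish
    return selected

# The greedy choice picks (1, 3) and (4, 6) and skips the long interval (0, 10).
print(interval_schedule([(0, 10), (1, 3), (4, 6)]))
```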
If we instead wanted to schedule all of the intervals using as few resources as possible, the algorithm would only need a slight modification: each interval is assigned a label, with overlapping intervals receiving different labels, and each label corresponds to a different "layer" or resource that the interval will occupy for its specified time. The number of labels needed equals the depth of the set of intervals, the maximum number of intervals that overlap at any single point in time.
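Here is a rough sketch of that partitioning idea in Python, again assuming (start, finish) tuples; each numeric label stands in for one resource, and the heap simply tracks when each resource next becomes free.

```python
import heapq

def interval_partition(intervals):
    """Assign every interval a resource label, using as few resources as possible."""
    labels = {}
    free_at = []      # min-heap of (finish_time, label) for resources in use
    next_label = 0
    for start, finish in sorted(intervals):   # process intervals by start time
        if free_at and free_at[0][0] <= start:
            # The earliest-freed resource is available again: reuse its label.
            _, label = heapq.heappop(free_at)
        else:
            # Every resource is still busy at this start time: open a new one.
            label, next_label = next_label, next_label + 1
        labels[(start, finish)] = label
        heapq.heappush(free_at, (finish, label))
    return labels

# Three pairwise-overlapping intervals force three resources; (5, 7) reuses label 0.
print(interval_partition([(0, 4), (1, 5), (2, 6), (5, 7)]))
```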
This problem gives a set of requests that all become available at a common start time s and each needs the resource for a specific length of time. The difference from the previous problem is that instead of a fixed start and end time, each request comes only with a length and a deadline by which it should be completed, and the goal is to minimize the maximum lateness. These criteria suggest a few possible rules by which we might define the greedy algorithm.
We may order the jobs by increasing length. This does not work because it is insensitive to deadlines: a short job with a distant deadline gets scheduled before a long job with a tight deadline, so the long job finishes late even though the short job had plenty of slack.
Maybe the jobs should be scheduled in order of slack, the difference between the deadline and the length. This seems better since it takes deadlines into account, but it still fails. A counterexample: one job takes one hour and is due after two hours, another takes three hours and is due after three hours. The second job has no slack, so it is scheduled first and finishes on time, while the first job finishes at hour four and is two hours late. Swapping the two minimizes the maximum lateness: the first job finishes on time and the second is only one hour late.
The rule we should use is scheduling the jobs with the earliest deadlines first. The algorithm schedules each job as soon as the previous one completes, so there is no idle time, and it never places a job with a later deadline ahead of one with an earlier deadline, so it produces no inversions (an inversion being a pair of jobs in which the one with the later deadline is scheduled first). Additionally, two different schedules with neither inversions nor idle time can differ only in the order of jobs that share the same deadline, and since jobs with the same deadline are scheduled consecutively, this does not affect the maximum lateness.
Since there is an optimal schedule with no inversions and no idle time, and all schedules with no inversions and no idle time have the same maximum lateness, the schedule produced by the greedy algorithm, which has no inversions and no idle time, must be an optimal schedule.
This algorithm will run in O(n log n) time, since the costliest step in the algorithm will be sorting the intervals by deadline.
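As a quick check of the earliest-deadline-first rule, here is a minimal sketch in which jobs are (length, deadline) pairs and every job is assumed to be available at time 0; those representational details are mine, not the book's.

```python
def schedule_min_lateness(jobs):
    """Return (schedule, maximum lateness) for jobs given as (length, deadline) pairs."""
    schedule = []
    time = 0
    max_lateness = 0
    for length, deadline in sorted(jobs, key=lambda j: j[1]):  # earliest deadline first
        start, finish = time, time + length
        schedule.append((start, finish, deadline))
        max_lateness = max(max_lateness, finish - deadline)
        time = finish   # no idle time: the next job starts immediately
    return schedule, max_lateness

# The slack counterexample from above: earliest-deadline-first gives lateness 1, not 2.
print(schedule_min_lateness([(1, 2), (3, 3)]))
```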
This was a dense section which took me a while to get through due to the level of technicality and specification of some of the proofs. It required a good deal of attention and focus at some points, including the proof for statement 4.9.
A minimum spanning tree of a graph is a tree that connects all of the nodes using a subset of the edges of minimum total cost. Additionally, for the discussion in this section the edge costs are assumed to be strictly positive and all distinct, which guarantees that the minimum spanning tree is unique.
This section proposes three algorithms for finding a graph's minimum spanning tree, all of which return a correct MST. The first algorithm sketched out is Kruskal's Algorithm, which considers edges in order of increasing cost and adds each one so long as it does not create a cycle with the edges already added.
The next algorithm outlined is Prim's Algorithm. It is similar to Dijkstra's in that it starts at a specified node and grows a tree, at each step adding the cheapest edge connecting a node already reached to a node not yet reached. This algorithm also cannot create a cycle, since that would require adding an edge whose endpoints are both already in the tree.
The last algorithm discussed in this section is the Reverse-Delete Algorithm. As its name suggests, it works in the opposite direction from the previous algorithms: starting from the full graph, it considers edges from most expensive to least and deletes each one unless doing so would disconnect the graph. It terminates when every remaining edge would disconnect the graph if deleted.
Additionally, the cut and cycle properties are treated by this section. The cut property says that if S is any nonempty, proper subset of the nodes and e is the minimum-cost edge with exactly one endpoint in S, then every minimum spanning tree must contain e. The cycle property, on the other hand, says that if C is any cycle and f is the maximum-cost edge in C, then no minimum spanning tree contains f.
Having the cut property, we can now show that Kruskal's and Prim's Algorithms produce minimum spanning trees of their graphs. Prim's algorithm works by starting at any node and repeatedly applying the cut property: it adds the cheapest edge leaving the explored set S and then adds the newly reached node to S. Kruskal's employs both properties; once the edges are ordered from smallest to largest, each edge considered either completes a cycle or it does not. If the current edge completes a cycle, it is the most expensive edge on that cycle and is discarded by the cycle property; if it does not, it is the cheapest edge leaving the component containing one of its endpoints and is added to the MST by the cut property.
The cycle property gives us insight into why the Reverse-Delete Algorithm is correct. When an edge e is deleted, it lies on a cycle C, and since the edges are considered in decreasing order of cost and e is the first edge on that cycle to be considered, e must be the most costly edge on C; by the cycle property, it does not belong in the minimum spanning tree.
Prim's algorithm can be implemented so that it runs in O(m log n) time for m edges and n nodes, much like Dijkstra's algorithm, which it closely resembles. Using a heap-based priority queue gives O(log n) time for the ExtractMin and ChangeKey operations; there are n ExtractMin calls and at most m ChangeKey calls, yielding a total running time of O(m log n).
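The sketch below is one plausible way to write Prim's algorithm with Python's heapq; since heapq has no ChangeKey operation, stale heap entries are simply skipped when popped ("lazy deletion"), which still keeps the running time within O(m log n). The (cost, u, v) edge format is my own choice.

```python
import heapq

def prim_mst(n, edges):
    """Return the MST edges of a connected graph on nodes 0..n-1.

    edges is a list of (cost, u, v) triples.
    """
    adj = [[] for _ in range(n)]
    for cost, u, v in edges:
        adj[u].append((cost, v))
        adj[v].append((cost, u))

    in_tree = [False] * n
    mst = []
    heap = [(0, 0, 0)]          # (cost, node, parent); grow the tree from node 0
    while heap:
        cost, u, parent = heapq.heappop(heap)
        if in_tree[u]:
            continue            # stale entry: u was already attached more cheaply
        in_tree[u] = True
        if u != 0:
            mst.append((parent, u, cost))
        for c, v in adj[u]:
            if not in_tree[v]:
                heapq.heappush(heap, (c, v, u))
    return mst

print(prim_mst(4, [(1, 0, 1), (4, 0, 2), (2, 1, 2), (5, 1, 3), (3, 2, 3)]))
```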
This section was quite dense to cut through due to the lengthy and notation-dependent proofs. The pictures helped somewhat but I did have to resort to class notes to demystify some of what was trying to be conveyed in the proofs.
Kruskal's Algorithm requires a way to rapidly identify the connected components to which two nodes on either end of an edge belong. This requirement is met by a data structure called the Union-Find Data Structure. This data structure supports three operations: MakeUnionFind(S) which returns a Union-Find data structure on the set S where each element is in its own set, Find(u) which returns the name of the set containing node u, and Union(A, B) which merges sets A and B into a single set.
The chapter discusses two different implementations for the Union-Find data structure, but the first is largely introduced as a comparative tool for the operation of the second so I will omit it in these notes since many components are similar.
The Union-Find data structure favored in the section makes use of pointers. Initially, the MakeUnionFind(S) operation makes a record for each element of the set and sets each element's pointer to point to itself. When two disjoint sets are united, the pointer of the name of one set is reset to point to the name of the other, which becomes the name of the combined set; to keep Find efficient, the smaller set is the one pointed at the larger. The rest of the pointers in the joined sets do not need to be reset. To find the name of the set containing a node u, pointers are followed until no more remain, at which point we have reached the name of the set. For this implementation, Union requires constant time since it merely updates a pointer, MakeUnionFind(S) requires time linear in the size of the set, and Find requires O(log n) time. There is a way to improve Find further, called path compression: after a call to Find, every pointer along the traversed path is reset to point directly at the name of the set. This does not change the O(log n) worst-case running time of a single Find, but the real gain comes from subsequent calls of Find along that path, which now take essentially constant time!
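Here is how I would sketch the pointer-based structure in Python, with union by size and the path compression just described; the class and method names are mine.

```python
class UnionFind:
    def __init__(self, elements):
        # MakeUnionFind(S): every element starts out as the name of its own set.
        self.parent = {x: x for x in elements}
        self.size = {x: 1 for x in elements}

    def find(self, u):
        # Follow pointers until reaching the name of the set ...
        root = u
        while self.parent[root] != root:
            root = self.parent[root]
        # ... then compress the path so later Finds along it are immediate.
        while self.parent[u] != root:
            self.parent[u], u = root, self.parent[u]
        return root

    def union(self, a, b):
        # Point the name of the smaller set at the name of the larger set.
        a, b = self.find(a), self.find(b)
        if a == b:
            return
        if self.size[a] < self.size[b]:
            a, b = b, a
        self.parent[b] = a
        self.size[a] += self.size[b]
```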
To implement Kruskal's algorithm using a Union-Find data structure, the edges must first be sorted, which takes O(m log n) time. Then the two endpoints of each edge are checked for their connected component with a Find call each; if they are not in the same component, the components are united with a Union call. There are at most 2 Find calls per edge and n-1 Union calls, and with either the array-based or the pointer-based implementation of the data structure, the overall running time is O(m log n).
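Putting the two pieces together, this is a sketch of Kruskal's algorithm built on the UnionFind class above (assumed to be in scope); edges are (cost, u, v) triples, a representation I chose for illustration.

```python
def kruskal_mst(nodes, edges):
    """Return the MST edges of a connected graph."""
    uf = UnionFind(nodes)
    mst = []
    for cost, u, v in sorted(edges):        # cheapest edge first
        if uf.find(u) != uf.find(v):
            # Different components, so this edge cannot close a cycle: keep it.
            uf.union(u, v)
            mst.append((u, v, cost))
    return mst

print(kruskal_mst([0, 1, 2, 3],
                  [(1, 0, 1), (4, 0, 2), (2, 1, 2), (5, 1, 3), (3, 2, 3)]))
```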
I didn't find this chapter to be too dense and the diagrams filled in the gaps fairly well so I would give it a reasonably high readability.
Clustering problems revolve around classifying objects into groups based on their distance from one another. For some collections of items this distance is physical, but for many others the distance is a measure of similarity for which a distance function must be defined. The idea of a clustering is that items more like each other end up in the same cluster, while items less alike end up in different clusters. The trouble is finding an efficient way to partition the objects into a specified number k of clusters.
It turns out that deleting the k-1 costliest edges from the minimum spanning tree produced by Kruskal's algorithm yields a k-clustering of maximum spacing, with spacing defined as the minimum distance between any pair of points lying in different clusters. The k-1 most expensive edges of the tree are the ones removed because they represent the longest distances in the weighted graph. Keeping them would leave the spanning tree as a single connected component, whereas deleting them splits it into exactly k components, each of which becomes a cluster.
Since a clustering is yielded by an execution of Kruskal's algorithm that simply terminates before adding the last k-1 edges, it will use the same O(m log n) time that normal Kruskal's algorithm uses.
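A small sketch of that idea, again reusing the UnionFind class from earlier (assumed in scope): run Kruskal's merges in order of increasing distance but stop as soon as only k components remain, which is the same as never adding the last k-1 edges. The one-dimensional points and distance function here are purely illustrative.

```python
def k_clustering(points, dist, k):
    """Partition points into k clusters of maximum spacing."""
    uf = UnionFind(points)
    edges = sorted((dist(p, q), p, q)
                   for i, p in enumerate(points) for q in points[i + 1:])
    components = len(points)
    for d, p, q in edges:
        if components == k:
            break                      # stopping early leaves exactly k clusters
        if uf.find(p) != uf.find(q):
            uf.union(p, q)
            components -= 1
    clusters = {}                      # group points by the name of their set
    for p in points:
        clusters.setdefault(uf.find(p), []).append(p)
    return list(clusters.values())

points = [0, 1, 2, 10, 11, 25]
print(k_clustering(points, lambda a, b: abs(a - b), 3))   # [[0, 1, 2], [10, 11], [25]]
```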
I found this chapter surprisingly intuitive and stimulating and thus it was easily readable.
This section begins by discussing different ways one may encode data. One example discussed is a fixed-length scheme of 1's and 0's, which can encode 2^b symbols using b bits per symbol. Another scheme is one such as Morse code, which uses strings of different lengths to encode different characters based on their frequency. Letters that occur more frequently, such as e, t, and a, get one- or two-bit strings, while less frequent letters get four- or five-bit strings. This type of encoding is advantageous because the more frequent letters have shorter encodings, which makes the encoded text shorter overall.
The problem with such an encoding is that it can be ambiguous. Since e is 0, t is 1, and a is 01, the string 01 could be read as either "et" or "a". Morse code handles this by adding a slight pause between letters, but if the pause isn't long enough it may be missed, and in any case the pause effectively becomes a third symbol in the encoding.
Such ambiguity can be overcome by a construction called a prefix code. A prefix code is one in which the encoding of one letter in the alphabet cannot be a prefix of the encoding of another. In the Morse code example, e's encoding, 0, was a prefix of a's encoding, 01. An example of a prefix code for the alphabet {a, b, c} is a = 11, b = 01, c = 00. Using variable-length encodings, it is possible to build an optimal prefix code by giving the most frequently used letters the shortest encodings.
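To convince myself that a prefix code really removes the ambiguity, here is a tiny decoder using the {a, b, c} code from above: read bits until they match a codeword, emit that letter, and repeat.

```python
def decode(bits, code):
    """Decode a bit string under a prefix code (letter -> codeword)."""
    reverse = {word: letter for letter, word in code.items()}
    letters, current = [], ""
    for bit in bits:
        current += bit
        if current in reverse:          # a complete codeword has been read
            letters.append(reverse[current])
            current = ""
    return "".join(letters)

print(decode("110100", {"a": "11", "b": "01", "c": "00"}))   # -> "abc"
```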
A metric known as ABL, or Average Bits per Letter, can be used to compare different variable-length encodings; the encoding with the lowest ABL is optimal. ABL is found by multiplying the length of each letter's encoding by that letter's frequency in the document being encoded and summing over all letters. Two tree-based constructions are discussed: Shannon-Fano codes and Huffman codes. Both strategies build a binary tree and assign a 0 to each left child and a 1 to each right child, but the way they build the trees differs vastly.
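As a small worked illustration of ABL (with made-up frequencies and codes of my own), compare a fixed-length 3-bit code against a variable-length prefix code over five letters:

```python
def abl(frequencies, code):
    """Average bits per letter: sum over letters of frequency times encoding length."""
    return sum(frequencies[x] * len(code[x]) for x in frequencies)

freqs    = {"a": 0.32, "b": 0.25, "c": 0.20, "d": 0.18, "e": 0.05}
fixed    = {"a": "000", "b": "001", "c": "010", "d": "011", "e": "100"}
variable = {"a": "11",  "b": "10",  "c": "01",  "d": "001", "e": "000"}
print(abl(freqs, fixed), abl(freqs, variable))   # roughly 3.0 versus 2.23 bits per letter
```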
Shannon-Fano codes split the set of letters into two subsets of (nearly) equal total frequency and then recurse on each subset until every letter has its own encoding. This top-down approach was a good start, but it is not always optimal and served as a precursor to the superior Huffman codes. The important difference is that Huffman codes build the tree from the bottom up: the two least frequent letters become children of a new meta-letter whose frequency is the sum of theirs, the meta-letter is added back into the alphabet, and the algorithm recurses on the smaller alphabet.
This algorithm lends itself to the use of a priority queue: letters are inserted with their frequencies as keys, and extracting the minimum and inserting a new meta-letter each take O(log n) time. Since each iteration removes two letters and adds one meta-letter back, there are n-1 iterations, each using the aforementioned O(log n) time, giving O(n log n) total time.
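Here is a compact Huffman sketch along those lines, using Python's heapq as the priority queue; the tie-breaking counter and the made-up frequencies are my own additions for illustration.

```python
import heapq
from itertools import count

def huffman_code(frequencies):
    """Return a prefix code (letter -> bit string) built bottom-up."""
    tiebreak = count()   # keeps heap entries comparable when frequencies are equal
    heap = [(freq, next(tiebreak), letter) for letter, freq in frequencies.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        # Merge the two least frequent (meta-)letters into a new meta-letter.
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (left, right)))

    code = {}
    def assign(node, prefix):
        if isinstance(node, tuple):          # an internal meta-letter
            assign(node[0], prefix + "0")    # left child gets a 0
            assign(node[1], prefix + "1")    # right child gets a 1
        else:                                # an original letter: record its codeword
            code[node] = prefix
    assign(heap[0][2], "")
    return code

print(huffman_code({"a": 0.32, "b": 0.25, "c": 0.20, "d": 0.18, "e": 0.05}))
```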