

Chapter 4: Greedy Algorithms

This chapter is dedicated to a topic most of our upbringings have probably told us to avoid: greediness. But this kind of greed is more good than bad, a paradox given what one would normally think when hearing the word “greedy.” Greedy algorithms build a solution to a problem one step at a time, making the locally best choice at each step along the way. For the problems in this chapter, this approach turns out to be optimal, or at least provably close to optimal. Several specific techniques for analyzing greedy algorithms will be discussed, such as “greedy stays ahead” and exchange arguments, along with applications like finding the shortest path in a graph.

4.1: Interval Scheduling: The Greedy Algorithm Stays Ahead

The first application of a greedy algorithm involves scheduling requests over an interval of time as efficiently as possible, fitting in the largest number of non-overlapping tasks. Though it might seem intuitive to first select the job that starts earliest, this is not optimal, as that first job could span a very long interval. Selecting the smallest interval also proves not to be optimal, since it can block a pair of intervals that an optimal solution would include; the same goes for selecting the job that conflicts with the fewest others.

Thus, the right way to select intervals is actually by their finishing time, always choosing the one that finishes earliest among those compatible with what has been selected so far. In the proof, we see that the greedy solution's i-th selected interval always finishes no later than the i-th interval of any optimal solution — greedy “stays ahead” — so it selects at least as many intervals and is therefore optimal.

Through this, it is shown that the algorithm runs in O(n log n) time: sorting the n requests by finishing time dominates, and a single linear pass then selects the compatible intervals.
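The sort-then-scan procedure above can be sketched in a few lines of Python. This is a minimal illustration of the earliest-finish-time rule, not the book's exact pseudocode; the sample job list is made up.

```python
def interval_schedule(intervals):
    """Select a maximum set of non-overlapping (start, finish) intervals."""
    selected = []
    last_finish = float("-inf")
    # Sort by finishing time: O(n log n) dominates the runtime.
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:  # compatible with everything chosen so far
            selected.append((start, finish))
            last_finish = finish
    return selected

jobs = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9), (6, 10), (8, 11)]
print(interval_schedule(jobs))  # -> [(1, 4), (5, 7), (8, 11)]
```

Note that the job starting earliest, (0, 6), and the many short overlapping jobs all lose out to the earliest finishers.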

The next problem looked at in the book is one where we must schedule every task and decide how many resources are needed to do so. The example given, as in class, was the number of classrooms needed for a set of lectures. The key term for determining this is depth; that is, the largest number of tasks overlapping at any single point in time. This is exactly how many resources are needed to solve the problem most efficiently.

To design this algorithm, we process the tasks in order of start time, assigning each one a label different from the labels of all tasks that overlap it. This uses exactly d labels, where d is the depth (the maximum number of resources needed), and the proof shows that no schedule can use fewer.

Overall, this section was pretty straightforward but provided good insight, receiving a respectable score of 8/10.

4.2: Scheduling to Minimize Lateness: An Exchange Argument

To start off, we go back to the first problem from the previous section. The difference here is that each job now has a deadline and is considered late if it finishes after that deadline. The greedy algorithm sorts the jobs in order of increasing deadline and runs them in that order, scheduling the earliest deadline first and attempting to make every job finish before its deadline. Of course, that will not always be possible, and some jobs will be late; the goal is to minimize the maximum lateness.

This algorithm places jobs one right after the other, producing no gaps or “idle time.” It also contains no “inversions,” where a job scheduled later has an earlier deadline than a job scheduled before it. The exchange argument shows that swapping any inverted pair never increases the maximum lateness, as it only moves a job with an earlier deadline… well, earlier. Thus, the greedy algorithm's maximum lateness L is optimal.
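The earliest-deadline-first schedule is short enough to sketch directly. The job list here is a hypothetical example, with each job given as (duration, deadline).

```python
def minimize_lateness(jobs):
    """Run jobs (duration, deadline) in order of increasing deadline;
    return the schedule and the maximum lateness."""
    t = 0           # jobs run back to back, so there is no idle time
    max_late = 0
    order = []
    for duration, deadline in sorted(jobs, key=lambda j: j[1]):
        t += duration
        max_late = max(max_late, t - deadline)  # lateness is 0 if on time
        order.append((duration, deadline))
    return order, max_late

_, L = minimize_lateness([(3, 6), (2, 8), (1, 9), (4, 9), (3, 14), (2, 15)])
print(L)  # -> 1: one job finishes at time 10 against a deadline of 9
```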

This section was shorter and a relatively easy read, besides some messy equations. 6/10.

4.4: Shortest Paths in a Graph

The problem at hand was to find the shortest path from a starting node to a desired node.

We are given a directed graph with a designated start node, which we place in our explored set with a distance of 0. At each step, we look at the edges leaving the explored set and add the node that can be reached by the shortest total path through the explored set, recording that length as its distance. We continue to do this until we reach our desired node, upon which we terminate. This is Dijkstra's Algorithm.

Because we always select the node with the smallest tentative distance, updating those distances as we go, we end up with the shortest path to every explored node, including the desired one. This is the greedy aspect of the algorithm: it commits to the best-looking path at each step. We must also assume that negative edge lengths are not allowed, as they would break the algorithm.

Looking at the runtime of the algorithm, the while loop runs n - 1 times (reaching every node, excluding the start node), and a naive implementation scans all m edges in each iteration, so the algorithm runs in O(mn) time. But using the right data structures improves greatly upon this, so we are told to use a priority queue for the nodes with the tentative distances as keys (the shorter the length, the higher the priority). At first, the keys will all be infinite, but they are decreased as we find smaller and smaller lengths to each specific node. Each edge triggers at most one ChangeKey and each node one ExtractMin, both costing O(log n), so when all is said and done we are left with a runtime of O(m log n) for the algorithm.
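The heap-based version can be sketched with Python's `heapq`. Since `heapq` has no ChangeKey operation, a common substitute is to push a new, smaller entry and skip stale ones on extraction; the small example graph is made up.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a directed graph with
    non-negative edge lengths. graph: {node: [(neighbor, length), ...]}."""
    dist = {source: 0}
    pq = [(0, source)]  # priority queue keyed by tentative distance
    while pq:
        d, u = heapq.heappop(pq)  # ExtractMin, O(log n)
        if d > dist.get(u, float("inf")):
            continue  # stale entry superseded by a shorter path
        for v, length in graph.get(u, []):
            nd = d + length
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))  # stands in for ChangeKey
    return dist

g = {"s": [("u", 1), ("v", 4)], "u": [("v", 2), ("t", 6)], "v": [("t", 3)]}
print(dijkstra(g, "s"))  # -> {'s': 0, 'u': 1, 'v': 3, 't': 6}
```

Note how the direct edge s→v of length 4 is beaten by the path s→u→v of length 3, exactly the reassignment of distances described above.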

This algorithm was enlightening, as it reinforced the lesson in class and built upon it. 8/10 for readability and clarity, as well as interesting content.

4.5: The Minimum Spanning Tree Problem

In a minimum spanning tree, we are looking to connect all the nodes of a graph as cheaply as possible. The result cannot contain any cycles: if it did, we could delete one edge of the cycle, remain connected, and lower the total cost, so a cycle always means a wasted edge.

Curiously enough, the Minimum Spanning Tree Problem can be solved in a number of ways; three, to be exact. The first begins with an empty set of edges and inserts the edges with the least cost first, building the tree one edge at a time. If adding a certain edge would result in a cycle, then that edge is simply discarded. This is Kruskal's Algorithm.

The next is called Prim's Algorithm. It is a spin-off of Dijkstra's in that you start with a root node and “greedily” grow the tree out from this starting node, successively attaching the node that can be reached by the cheapest edge leaving the tree.
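Prim's growth step can be sketched with a heap of candidate edges leaving the tree, echoing the Dijkstra sketch style. This assumes an undirected graph stored as adjacency lists with each edge listed in both directions; the example graph is invented.

```python
import heapq

def prim(graph, root):
    """Grow an MST outward from root.
    graph: {node: [(neighbor, cost), ...]}, undirected."""
    in_tree = {root}
    tree_edges = []
    pq = [(cost, root, v) for v, cost in graph[root]]
    heapq.heapify(pq)
    while pq:
        cost, u, v = heapq.heappop(pq)  # cheapest edge leaving the tree
        if v in in_tree:
            continue  # both endpoints already attached: would form a cycle
        in_tree.add(v)
        tree_edges.append((u, v, cost))
        for w, c in graph[v]:
            if w not in in_tree:
                heapq.heappush(pq, (c, v, w))
    return tree_edges

g = {"a": [("b", 1), ("c", 3)],
     "b": [("a", 1), ("c", 2)],
     "c": [("a", 3), ("b", 2)]}
print(prim(g, "a"))  # -> [('a', 'b', 1), ('b', 'c', 2)]
```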

The last method is where we “run a sort of backward version of Kruskal's Algorithm.” Starting with the full graph, we delete edges in order from highest to least cost, skipping any deletion that would disconnect the graph. This is usually called the Reverse-Delete Algorithm.

Prim's and Kruskal's can be run in O(m log n) time, but Reverse-Delete is not as easy to implement within this bound. For Prim's, this is because, like Dijkstra's, it uses a heap-based priority queue, performing at most m edge updates, with ExtractMin and ChangeKey each running in O(log n) time; that is how we get the desired runtime. Kruskal's and Reverse-Delete's runtimes are discussed in the following sections.

Overall, this chapter was pretty drawn out with large proofs and simple ideas that went on and on, just to be put in more understandable terms in the following paragraph. I give it a 3/10.

4.6: Implementing Kruskal's Algorithm: The Union-Find Data Structure

In order to implement this algorithm, we create a structure called the Union-Find structure. As we consider each edge, this lets us check which component each of its two endpoints lies in. If they are the same, a path between them already exists, so the edge would create a cycle and is not added. But if they are different, the edge is added and the two components are merged with a Union operation. This works great as we add edges, because it is just a check with an if statement, but the same cannot be said for deletion, which is not supported. A simple implementation keeps an array recording each node's component name, so Find takes O(1); by relabeling the smaller component on each merge, any sequence of k Union operations takes O(k log k) time in total, as we said in the last section.
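The edge-by-edge check described above is the heart of Kruskal's Algorithm, so a sketch of both together may help. This uses the pointer-style union by size (so Find follows at most O(log n) pointers), without the path-compression refinements; the example nodes and edges are made up.

```python
class UnionFind:
    """Pointer-based union-find with union by size; Find is O(log n)."""
    def __init__(self, elements):
        self.parent = {x: x for x in elements}  # each node starts alone
        self.size = {x: 1 for x in elements}

    def find(self, x):
        while self.parent[x] != x:  # follow pointers to the set's name
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra  # point the smaller set at the larger
        self.size[ra] += self.size[rb]

def kruskal(nodes, edges):
    """edges: list of (cost, u, v); returns the MST edge list."""
    uf = UnionFind(nodes)
    tree = []
    for cost, u, v in sorted(edges):  # cheapest edges first
        if uf.find(u) != uf.find(v):  # different components: safe to add
            uf.union(u, v)
            tree.append((cost, u, v))
    return tree

edges = [(1, "a", "b"), (2, "b", "c"), (3, "a", "c"), (4, "c", "d")]
print(kruskal(["a", "b", "c", "d"], edges))
# -> [(1, 'a', 'b'), (2, 'b', 'c'), (4, 'c', 'd')]
```

The edge (3, "a", "c") is the one discarded by the if statement: both endpoints already share a component.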

We can actually have an alternative data structure that uses pointers. This is useful because a Union just updates one set's pointer, from pointing at itself to pointing at the set it is now unioned with, in O(1) time. The trade-off is that the Find operation now takes O(log n), since we may have to follow a chain of pointers.

Whichever one we use, Kruskal's Algorithm runs in O(m log n) time overall; it is just a matter of slight improvements that aren't too noticeable in the long run.

This section was shorter but a duller read, as it was basically review. 5/10.

4.7: Clustering

This problem has to do with collections of objects, such as photos or letters, that we would like to classify into groups, or clusters. To start, we grow a graph by first connecting the pairs with the closest distance. The connection here is that we are just running Kruskal's Algorithm, stopping when we obtain the specified number k of connected components. So we basically do everything in Kruskal's, but delete the “k - 1 most expensive edges” of the resulting tree. This gives us clusters whose points are as closely related as possible.

This section was actually pretty interesting and extremely short, so 8/10.
