===== 4.1 – Interval Scheduling: The Greedy Algorithm Stays Ahead =====

The main challenge in designing a good greedy algorithm is deciding which rule to use for the selection process. For interval scheduling, some rules that might seem good are minimal start time and the smallest interval time, but the best rule is to use the request that finishes first. This way, we maximize the time left over to service the other requests. To prove that our greedy solution, //A//, is optimal, we must show that //A// contains the same number of intervals as an optimal solution, //O//. We can then compare partial solutions, showing that //A// is doing better in a step-by-step fashion.

Interval Scheduling Algorithm:
<code>
Sort jobs by finish times so that f1 ≤ f2 ≤ ... ≤ fn
G = {}
for j = 1 to n
    if job j is compatible with G
        G = G ∪ {j}
return G
</code>
This algorithm runs in O(n log n) time: we first sort the requests, taking O(n log n); we then construct an array in O(n) time; and finally we do one pass through the n intervals, spending constant time per interval, which takes O(n) time. So, overall, the algorithm is O(n log n).
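
As a quick sanity check, here is a small Python sketch of the same rule (my own code, not the book's), assuming each job is a (start, finish) tuple:
<code python>
def interval_schedule(jobs):
    # Sort by finish time, mirroring f1 <= f2 <= ... <= fn above.
    jobs = sorted(jobs, key=lambda job: job[1])
    selected = []
    last_finish = float("-inf")
    for start, finish in jobs:
        # A job is compatible with G if it starts no earlier than the
        # finish time of the last job we selected.
        if start >= last_finish:
            selected.append((start, finish))
            last_finish = finish
    return selected

print(interval_schedule([(0, 6), (1, 4), (3, 5), (5, 7), (5, 9), (8, 10)]))
# -> [(1, 4), (5, 7), (8, 10)]
</code>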
| + | |||
| + | A similar problem is the Interval Partitioning Problem in which we must partition all intervals across multiple resources (ex. scheduling classes in classrooms). In any instance of Interval Partitioning, | ||
| + | |||
Interval Partitioning Algorithm:
<code>
Sort intervals by starting time so that s1 ≤ s2 ≤ ... ≤ sn
d = 0
for j = 1 to n
    if lecture j is compatible with some classroom k
        schedule lecture j in classroom k
    else
        allocate a new classroom d + 1
        schedule lecture j in classroom d + 1
        d = d + 1
</code>
This algorithm runs in O(n log n) time: for each classroom k we maintain the finish time of the last job added to it, and we keep the classrooms in a priority queue keyed on that finish time, so each compatibility check and insertion takes O(log n) time.
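
Here is a minimal Python sketch of that idea (my own, with heapq standing in for the priority queue), assuming lectures are (start, finish) tuples:
<code python>
import heapq

def partition_intervals(lectures):
    lectures = sorted(lectures)        # sort by start time: s1 <= ... <= sn
    rooms = []                         # min-heap of (last_finish, room_id)
    schedule = []                      # (start, finish, room_id) triples
    for start, finish in lectures:
        if rooms and rooms[0][0] <= start:
            # The classroom that frees up earliest is compatible: reuse it.
            _, room = heapq.heappop(rooms)
        else:
            room = len(rooms)          # allocate a new classroom
        schedule.append((start, finish, room))
        heapq.heappush(rooms, (finish, room))
    return schedule
</code>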
| + | |||
| + | I'd give this section an 8/10 on both readability and interestingness. | ||
| + | |||
| + | ===== 4.2 – Scheduling to Minimize Lateness: An Exchange Argument ===== | ||
| + | |||
| + | In this problem, we have a single resource and a set of n requests to use this resource for a given time interval. However, instead of a start and finish time, the request has a deadline, and it requires a time interval length, //t//, and it can be scheduled at any point before the deadline. The goal is to minimize the lateness of the request (finish time minus deadline). The correct greedy approach for this problem is to sort the jobs in increasing order of their deadlines, and then schedule them in this order. This way, we make sure that jobs with earlier deadlines get completed earlier. | ||
| + | |||
| + | Minimize Lateness: | ||
| + | < | ||
| + | Sort n jobs by deadline so that d1 ≤ d2 ≤ … ≤ dn t = 0 | ||
| + | for j = 1 to n | ||
| + | Assign job j to interval [t, t + tj] | ||
| + | sj = t | ||
| + | fj = t + tj | ||
| + | t = t + tj | ||
| + | output intervals [sj, fj] | ||
| + | </ | ||
| + | |||
| + | I'd give this section a 10/10 on readability and a 7/10 on interestingness. | ||
| + | |||
| + | ===== 4.4 – Shortest Paths in a Graph ===== | ||
| + | |||
| + | This section examines a greedy algorithm that calculates the shortest paths with a designated starting node, //s//, and the assumption that //s// has a path to every other node in the graph. Such an algorithm was created by Edsger Dijkstra in 1959, and the algorithm became known as Dijkstra' | ||
| + | |||
| + | Dijkstra' | ||
| + | < | ||
| + | Dijkstra' | ||
| + | Let S be the set of explored nodes | ||
| + | For each u in S, we store a distance d(u) | ||
| + | Initially S = {s} and d(s) = 0 | ||
| + | While S != V | ||
| + | Select a node v in S with at least one edge from S for which | ||
| + | d'(v) = min(e=(u, | ||
| + | Add v to S and define d(v) = d' | ||
| + | EndWhile | ||
| + | </ | ||
| + | |||
| + | Dijkstra' | ||
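
Here is how I would sketch it in Python with heapq (a lazy-deletion variant, not the book's exact presentation), assuming the graph is an adjacency dict {u: [(v, length), ...]} with nonnegative lengths:
<code python>
import heapq

def dijkstra(graph, s):
    d = {s: 0}                    # tentative distances; final once popped
    pq = [(0, s)]                 # min-heap of candidate d'(v) values
    while pq:
        dist, u = heapq.heappop(pq)
        if dist > d.get(u, float("inf")):
            continue              # stale entry: u was reached more cheaply
        for v, length in graph.get(u, []):
            new_dist = dist + length
            if new_dist < d.get(v, float("inf")):
                d[v] = new_dist
                heapq.heappush(pq, (new_dist, v))
    return d
</code>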
| + | |||
| + | This section was very readable, and I'd give it a 8/10 on readability and a 7/10 on interestingness. | ||
| + | |||
| + | ===== 4.5 – The Minimum Spanning Tree Problem ===== | ||
| + | |||
| + | The goal of the minimum spanning tree problem is to find a tree from a graph that is fully connected, but with the minimum cost. This can be done with 3 different greedy algorithms. One, called Kruskal' | ||
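
As an illustration of the second rule, here is a heap-based Python sketch of Prim's Algorithm (my own rendering), assuming an undirected graph stored as {u: [(v, cost), ...]}:
<code python>
import heapq

def prim(graph, root):
    in_tree = {root}
    tree_edges = []
    # Min-heap of (cost, u, v) edges leaving the current partial tree.
    pq = [(cost, root, v) for v, cost in graph[root]]
    heapq.heapify(pq)
    while pq and len(in_tree) < len(graph):
        cost, u, v = heapq.heappop(pq)
        if v in in_tree:
            continue                  # edge no longer leaves the tree
        in_tree.add(v)
        tree_edges.append((u, v, cost))
        for w, c in graph[v]:
            if w not in in_tree:
                heapq.heappush(pq, (c, v, w))
    return tree_edges
</code>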
| + | |||
| + | Overall this section was very readable and pretty interesting. 9/10 on readability and 8/10 on interestingness. | ||
| + | |||
| + | ===== 4.6 – Implementing Kruskal' | ||
| + | |||
| + | A Union-Find data structure is a special one created for Kruskal' | ||
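
A compact Python sketch of the idea (my own; this version uses union by size and path compression, which the book treats as refinements):
<code python>
class UnionFind:
    def __init__(self, nodes):
        self.parent = {u: u for u in nodes}
        self.size = {u: 1 for u in nodes}

    def find(self, u):
        while self.parent[u] != u:
            self.parent[u] = self.parent[self.parent[u]]  # path compression
            u = self.parent[u]
        return u

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return False              # u and v are already connected
        if self.size[ru] < self.size[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru          # merge the smaller component in
        self.size[ru] += self.size[rv]
        return True

def kruskal(nodes, edges):
    uf = UnionFind(nodes)
    # Insert edges in order of increasing cost, skipping cycle-creators.
    return [(u, v, c) for c, u, v in sorted(edges) if uf.union(u, v)]
</code>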
| + | |||
| + | Overall this section was pretty readable and interesting. 7/10 for both. | ||
| + | |||
| + | ===== 4.7 – Clustering ===== | ||
| + | |||
| + | Clustering happens when you're trying to classify a collection of objects into distinct groups. One way to divide them up into clusters is to calculate the distances between the distinct groups. However, there are infinitely many clusterings in a set; the issue is finding the ones with the max spacing. This procedure is the same as Kruskal' | ||
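
Sketching this in Python, reusing the UnionFind class from my 4.6 sketch above (treating distances as a list of (dist, p, q) triples over all pairs, which is my own setup):
<code python>
def k_clusters(points, distances, k):
    uf = UnionFind(points)
    components = len(points)
    for dist, p, q in sorted(distances):
        if components == k:
            break                     # stop before the last k - 1 merges
        if uf.union(p, q):
            components -= 1
    clusters = {}
    for p in points:                  # group points by their representative
        clusters.setdefault(uf.find(p), []).append(p)
    return list(clusters.values())
</code>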
| + | |||
| + | Overall this section was very interesting and very readable to me: 10/10. | ||
| + | |||
| + | ===== 4.8 – Huffman Codes and Data Compression ===== | ||
| + | |||
The goal of Huffman codes is lossless data compression. The original top-down approach created by Shannon and Fano isn't guaranteed to always produce an optimal code, so Huffman came up with a bottom-up approach that does. The idea is to give the most frequently used characters the shortest codewords in order to optimize the overall compression. Furthermore, the result is a prefix code: no codeword is a prefix of another, so encoded text can be decoded unambiguously.
| + | |||
| + | Using a Priority Queue, we can get Huffman' | ||
| + | < | ||
| + | if S has two letters: | ||
| + | Encode one letter as 0 and the other letter as 1 | ||
| + | else: | ||
| + | Let y* and z* be the two lowest-frequency letters | ||
| + | Form a new alphabet S’ by deleted y* and z* and replacing | ||
| + | them with a new letter w of freq fy* + fz* Recursively construct a prefix code y’ for S’ with tree T’ | ||
| + | Define a prefix code for S as follows: | ||
| + | Start with T’ | ||
| + | Take the leaf labeled w and add two children below it | ||
| + | labeled y* and z* | ||
| + | </ | ||
| + | |||
| + | I'd give this section a 10/10 on readability and interestingness. I was very interested in the topic, and it was explained well. | ||
