====Section 4.1: Interval Scheduling: The Greedy Algorithm Stays Ahead====

We can make our algorithm run in time O(n log n) as follows:
  * Begin by sorting the n requests in order of finishing time and labeling them in this order; that is, we will assume that f(i) < f(j) when i < j. This takes time O(n log n).
  * In an additional O(n) time, we construct an array S[1 ... n] with the property that S[i] contains the value s(i). We now select requests by processing the intervals in order of increasing f(i). We always select the first interval; we then iterate through the intervals in order until reaching the first interval j for which s(j) > f(1); we then select this one as well.
  * So we implement the greedy algorithm analyzed above in one pass through the intervals, spending constant time per interval. Thus this part of the algorithm takes time O(n).
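
A minimal runnable sketch of this two-step implementation (sort, then a single pass) in Python; the list of (start, finish) pairs is an assumed input format for illustration, and intervals that merely touch are treated as compatible:
<code python>
# A minimal sketch of the two-step O(n log n) implementation above;
# the list of (start, finish) pairs is an assumed input format.
def schedule_intervals(requests):
    # Step 1: sort by finishing time -- O(n log n)
    ordered = sorted(requests, key=lambda r: r[1])
    selected = []
    last_finish = float("-inf")
    # Step 2: one pass, constant time per interval -- O(n)
    for start, finish in ordered:
        if start >= last_finish:  # compatible with everything chosen so far
            selected.append((start, finish))
            last_finish = finish
    return selected

print(schedule_intervals([(1, 4), (3, 5), (4, 7), (6, 8)]))
# [(1, 4), (4, 7)]
</code>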
  
===A Related Problem: Scheduling All Intervals===
**//The Problem//**
A related problem arises if we have many identical resources available and we wish to schedule all the requests using as few resources as possible. This is called the //Interval Partitioning Problem// because the goal is to partition all intervals across multiple resources. We will now design a simple greedy algorithm that schedules all intervals using a number of resources equal to the depth of the set of intervals. This immediately implies the optimality of the algorithm, since no schedule can use fewer resources than the depth.
  
**//Designing the Algorithm//**
Let d be the depth of the set of intervals. We show how to assign a label to each interval, where the labels come from the set of numbers {1, 2, ..., d}, and the assignment has the property that overlapping intervals are labeled with different numbers. This gives the desired solution, since we can interpret each number as the name of a resource and the label of each interval as the name of the resource to which it is assigned. The algorithm we use for this (shown below) is a simple one-pass greedy strategy that orders intervals by their starting time.
  
<code>
Sort the intervals by their start times, breaking ties arbitrarily
Let I<sub>1</sub>, I<sub>2</sub>, ..., I<sub>n</sub> denote the intervals in this order
For j = 1, 2, 3, ..., n
    For each interval I<sub>i</sub> that precedes I<sub>j</sub> in sorted order and overlaps it
        Exclude the label of I<sub>i</sub> from consideration for I<sub>j</sub>
    Endfor
    If there is any label in {1, 2, ..., d} that has not been excluded then
        Assign a nonexcluded label to I<sub>j</sub>
    Else
        Leave I<sub>j</sub> unlabeled
    Endif
Endfor
</code>

**//Analyzing the Algorithm//**
If we use the greedy algorithm above, every interval will be assigned a label, and no two overlapping intervals will receive the same label. If you have d labels at your disposal, then as you sweep through the intervals from left to right, assigning an available label to each interval you encounter, you can never reach a point where all the labels are currently in use. Since our algorithm is using d labels, we conclude that it is, in fact, always using the minimum possible number of labels.
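
As a sketch of this analysis, the following Python variant of the sweep reuses a freed label whenever one is available and opens a new label only when every current label is busy, so the number of labels it opens equals the depth; the (start, finish) input format is an assumption, and the heap bookkeeping is a substitute for the exclusion loop in the pseudocode above:
<code python>
import heapq

# Heap-based rendering of the left-to-right sweep: labels are reused
# when their intervals finish, so the label count equals the depth d.
def partition_intervals(intervals):
    ordered = sorted(intervals, key=lambda r: r[0])  # order by start time
    in_use = []        # heap of (finish_time, label) for busy resources
    free_labels = []   # labels released by already-finished intervals
    next_label = 1
    assignment = {}
    for start, finish in ordered:
        # Release every resource whose interval has already ended
        while in_use and in_use[0][0] <= start:
            _, label = heapq.heappop(in_use)
            free_labels.append(label)
        if free_labels:
            label = free_labels.pop()
        else:
            label = next_label  # all current labels busy: open a new one
            next_label += 1
        assignment[(start, finish)] = label
        heapq.heappush(in_use, (finish, label))
    return assignment  # next_label - 1 labels were used in total

print(partition_intervals([(0, 3), (1, 4), (3, 5)]))
# {(0, 3): 1, (1, 4): 2, (3, 5): 1} -- depth 2, so two resources
</code>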
  
**Theorem 4.6** //The greedy algorithm above schedules every interval on a resource, using a number of resources equal to the depth of the set of intervals. This is the optimal number of resources needed.//

I found that there was too much information in this section, and it would have been easier to read if the authors did less explaining and more listing of steps. The excess information made the section hard to read; there doesn't need to be so much explanation in the introduction to a new subject. I would give this section a 5/10 for readability. I think another simple example or two of where a greedy algorithm would be the best solution could have been helpful.
  
====Section 4.2: Scheduling to Minimize Lateness: An Exchange Argument====
  
===The Problem===
Consider again a situation in which we have a single resource and a set of n requests to use the resource for an interval of time. Assume that the resource is available starting at time s. In contrast to the previous problem, each request is now more flexible: instead of a start time and finish time, request i has a deadline d<sub>i</sub> and requires a contiguous time interval of length t<sub>i</sub>, but it is willing to be scheduled at any time before the deadline. Suppose that we plan to satisfy each request, but we are allowed to let certain requests run late. Thus, beginning at our overall start time s, we will assign each request i an interval of time of length t<sub>i</sub>; let us denote this interval by [s(i), f(i)], with f(i) = s(i) + t<sub>i</sub>. Unlike the previous problem, then, the algorithm must actually determine a start time (and hence a finish time) for each interval.
  
===Designing the Algorithm===
There are several natural greedy approaches in which we look at the data (t<sub>i</sub>, d<sub>i</sub>) about the jobs and use this to order them according to some simple rule:
  
1. Schedule the jobs in order of increasing length t<sub>i</sub>, so as to get the short jobs out of the way quickly **-->** Ignores the deadlines of the jobs, so it does not work

2. A more natural greedy algorithm would be to sort jobs in order of increasing slack d<sub>i</sub> - t<sub>i</sub> **-->** This greedy rule fails as well: with jobs (t<sub>1</sub> = 1, d<sub>1</sub> = 2) and (t<sub>2</sub> = 10, d<sub>2</sub> = 10), smallest-slack-first runs job 2 first and makes job 1 nine units late, while the reverse order has maximum lateness only 1

3. Sort the jobs in increasing order of their deadlines d<sub>i</sub>, and schedule them in this order. This rule is called Earliest Deadline First, and there is an intuitive basis to it: we should make sure that jobs with earlier deadlines get completed earlier. **-->** This algorithm always produces optimal solutions. It is shown below.

<code>
Order the jobs in order of their deadlines
Assume for simplicity of notation that d<sub>1</sub> ≤ ... ≤ d<sub>n</sub>
Initially, f = s
Consider the jobs i = 1, ..., n in this order
    Assign job i to the time interval from s(i) = f to f(i) = f + t<sub>i</sub>
    Let f = f + t<sub>i</sub>
End
Return the set of scheduled intervals [s(i), f(i)] for i = 1, ..., n
</code>
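
A minimal runnable sketch of Earliest Deadline First in Python, which also reports the maximum lateness of the schedule it builds; the (length, deadline) input format is an assumption:
<code python>
# Earliest Deadline First: schedule jobs back to back in deadline order.
# Jobs are hypothetical (length, deadline) pairs; s is the start time.
def edf_schedule(jobs, s=0):
    f = s
    schedule = []
    max_lateness = 0
    for t, d in sorted(jobs, key=lambda job: job[1]):  # by deadline
        start, finish = f, f + t      # s(i) = f, f(i) = f + t_i
        schedule.append((start, finish))
        max_lateness = max(max_lateness, finish - d)
        f = finish                    # no idle time between jobs
    return schedule, max_lateness

print(edf_schedule([(1, 2), (2, 4), (3, 6)]))
# ([(0, 1), (1, 3), (3, 6)], 0) -- every job meets its deadline here
</code>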
  
**Theorem 4.7** //There is an optimal schedule with no idle time.//

Observe that the schedule the algorithm produces has no "gaps"--times when the machine is not working yet there are jobs left. The plan of the analysis is to gradually modify an optimal schedule O, preserving its optimality at each step, but eventually transforming it into a schedule that is identical to the schedule A found by the greedy algorithm. We refer to this type of analysis as an //exchange argument//. The main step in showing the optimality of our algorithm is to establish that there is an optimal schedule that has no inversions and no idle time.
  
**Theorem 4.8** //All schedules with no inversions and no idle time have the same maximum lateness.//
  
**//Extensions//**
A more difficult version of this problem would contain requests i that, in addition to the deadline d<sub>i</sub> and the requested time t<sub>i</sub>, would also have an earliest possible starting time r<sub>i</sub>, called the release time. Problems with release times arise naturally in scheduling problems where requests can take the form: Can I reserve the room for a two-hour lecture, sometime between 1 P.M. and 5 P.M.?
  
It is still somewhat unclear to me what differences, advantages, and disadvantages this method has compared to the greedy-stays-ahead method described in Section 4.1. I found this section difficult to digest, at a 5/10, so I will probably have to review it alongside Section 4.1 again, as well as my notes. I think it would have been helpful to read these sections before the lectures in this particular case, because I was also pretty lost during the lectures and I think I could have gotten some further clarification from them after reading the sections.
====Section 4.3: Optimal Caching: A More Complex Exchange Argument====
  
====Section 4.4: Shortest Paths in a Graph====
  
===The Problem===
Given a directed graph G = (V, E) with a designated start node s, determine the shortest path from s to every other node in the graph. Assume that s has a path to every other node in G. Each edge e has a length //l//<sub>e</sub> ≥ 0, indicating the time (or distance, or cost) it takes to traverse e. For a path P, the length of P--denoted //l//(P)--is the sum of the lengths of all edges in P.
  
  
===Designing the Algorithm===
The outline for Dijkstra's Algorithm is shown below:
  
<code>
Dijkstra's Algorithm (G, l)
Let S be the set of explored nodes
    For each u ∈ S, we store a distance d(u)
Initially S = {s} and d(s) = 0
While S ≠ V
    Select a node v ∉ S with at least one edge from S for which
        d’(v) = min<sub>e = (u,v):u∈S</sub> d(u) + //l//<sub>e</sub> is as small as possible
    Add v to S and define d(v) = d’(v)
EndWhile
</code>
  
===Analyzing the Algorithm===
Is it always true that when Dijkstra’s Algorithm adds a node v, we get the true shortest-path distance to v? We now answer this by proving the correctness of the algorithm, showing that the paths P<sub>u</sub> really are shortest paths. Dijkstra’s Algorithm is greedy in the sense that we always form the shortest new s-v path we can make from a path in S followed by a single edge.
  
**Theorem 4.14** //Consider the set S at any point in the algorithm’s execution. For each u ∈ S, the path P<sub>u</sub> is a shortest s-u path.//

Note that this fact immediately establishes the correctness of Dijkstra’s Algorithm, since we can apply it when the algorithm terminates, at which point S includes all nodes.

Dijkstra's Algorithm does not always find shortest paths if some of the edges can have negative lengths. Many shortest-path applications involve negative edge lengths, and a more complex algorithm is required for this case.
Another observation is that Dijkstra’s Algorithm is, in a sense, even simpler than it has been described here: it is really just a "continuous" version of the standard breadth-first search algorithm for traversing a graph.
  
**//Implementation and Running Time//**
There are n - 1 iterations of the While loop for a graph with n nodes, as each iteration adds a new node v to S. Selecting the correct node v efficiently is a more subtle issue. For a graph with m edges, computing all the minima directly can take O(m) time per iteration, so this would lead to an implementation that runs in O(mn) time; but we can do considerably better if we use the right data structures. First, we will explicitly maintain the values of the minima d’(v) = min<sub>e = (u,v):u∈S</sub> d(u) + //l//<sub>e</sub> for each node v ∈ V - S, rather than recomputing them in each iteration.
We can further improve the efficiency by keeping the nodes of V - S in a priority queue with d’(v) as their key. Recall that a priority queue can efficiently insert elements, delete elements, change an element’s key, and extract the element with the minimum key.
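
A minimal sketch of this heap-based implementation in Python, using the standard heapq module. Instead of ChangeKey, a cheaper rediscovered distance is simply pushed as a new entry and stale entries are skipped on extraction, a common substitute; the adjacency-dict graph format is an assumption:
<code python>
import heapq

# Dijkstra's Algorithm with a binary heap as the priority queue.
# graph[u] = [(v, length), ...] is a hypothetical input format.
def dijkstra(graph, s):
    d = {s: 0}          # best distance known so far; final once popped
    pq = [(0, s)]       # heap keyed by tentative distance d'(v)
    while pq:
        dist, u = heapq.heappop(pq)          # ExtractMin
        if dist > d.get(u, float("inf")):
            continue                         # stale entry; skip it
        for v, length in graph.get(u, []):
            alt = dist + length              # d(u) + l_e
            if alt < d.get(v, float("inf")):
                d[v] = alt                   # stands in for ChangeKey
                heapq.heappush(pq, (alt, v))
    return d

graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 2)], "b": []}
print(dijkstra(graph, "s"))  # {'s': 0, 'a': 1, 'b': 3}
</code>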
  
 **Theorem 4.15** //Using a priority queue, Dijkstra’s Algorithm can be implemented on a graph with n nodes and m edges to run in O(m) time, plus the time for n ExtractMin and m ChangeKey operations.// **Theorem 4.15** //Using a priority queue, Dijkstra’s Algorithm can be implemented on a graph with n nodes and m edges to run in O(m) time, plus the time for n ExtractMin and m ChangeKey operations.//
 +
 +Using the heap-based priority queue implementation (discussed in Chapter 2), each priority queue operation can be made to run in O(log n) time. Thus the overall time for the implementation is O(m log n).
 +
 +The biggest question I have after reading this section is how is Dijkstra's Algorithm better than a BFS, and why would someone choose this algorithm as opposed to a BFS? I had a hard time digesting this section too so I would give it a 6/10 for readability.
====Section 4.5: The Minimum Spanning Tree Problem====
  
===Minimum Spanning Tree (MST)===
  * Spanning Tree: spans all nodes in a graph
  * Given a connected graph G = (V, E) with positive edge weights c<sub>e</sub>, an MST is a subset of the edges T ⊆ E such that T is a spanning tree whose sum of edge weights is minimized

**MST Applications**
  * Network design
      * Telephone, electrical, hydraulic, TV cable, computer, road
  * Approximation algorithms for NP-hard problems
      * Traveling salesperson problem, Steiner tree
  * Indirect applications
      * Max bottleneck paths
      * Reducing data storage in sequencing amino acids in a protein
  * Cluster analysis

//Cayley’s Theorem:// There are n<sup>n-2</sup> spanning trees of the complete graph on n labeled nodes

**Possible algorithms to find the MST:**
  * //Kruskal’s Algorithm:// Start without any edges and build a spanning tree by successively inserting edges from E in order of increasing cost. Insert each edge e as long as it does not create a cycle when added to the edges already inserted. If inserting e would result in a cycle, discard e and continue. (A runnable version appears in Section 4.6 below.)
  * //Prim’s Algorithm:// Start with a root node s and try to greedily grow a tree from s outward. At each step, we simply add the node that can be attached as cheaply as possible to the partial tree we already have. (A sketch follows this list.)
  * //Reverse-Delete Algorithm:// Start with the full graph (V, E) and begin deleting edges in order of decreasing cost. As we get to each edge e (starting from the most expensive), we delete it as long as doing so would not actually disconnect the graph we currently have.
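
A minimal sketch of Prim's Algorithm in Python with a heap; the adjacency-dict format, with each edge listed in both directions, is an assumption:
<code python>
import heapq

# Prim's Algorithm: grow the tree from root s, always attaching the
# node reachable by the cheapest edge out of the current tree.
def prim(graph, s):
    in_tree = {s}
    tree_edges = []
    pq = [(cost, s, v) for v, cost in graph[s]]  # candidate edges
    heapq.heapify(pq)
    while pq and len(in_tree) < len(graph):
        cost, u, v = heapq.heappop(pq)
        if v in in_tree:
            continue                  # a cheaper attachment already won
        in_tree.add(v)
        tree_edges.append((u, v, cost))
        for w, c in graph[v]:
            if w not in in_tree:
                heapq.heappush(pq, (c, v, w))
    return tree_edges

graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("a", 1), ("c", 2)],
    "c": [("a", 4), ("b", 2)],
}
print(prim(graph, "a"))  # [('a', 'b', 1), ('b', 'c', 2)]
</code>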

**Properties of the MST**
  * //Cut property:// Let S be any subset of nodes, and let e be the minimum-cost edge with exactly one endpoint in S. Then the MST contains e.
      * //Cutset:// A cut is a subset of nodes S. The corresponding cutset D is the subset of edges with exactly one endpoint in S.
  * //Cycle property:// Let C be a cycle, and let f be the maximum-cost edge belonging to C. Then the MST does not contain f. (Both properties assume all edge costs c<sub>e</sub> are distinct.)
      * //Cycle:// Set of edges of the form a-b, b-c, c-d, ..., y-z, z-a
  * //Cycle-Cut Intersection:// A cycle and a cutset intersect in an even number of edges
  
====Section 4.6: Implementing Kruskal's Algorithm: The Union-Find Data Structure====
  
===Union-Find Data Structure===
  * Keeps track of a graph as edges are added
      * Cannot handle edge deletions
  * Maintains disjoint sets
      * E.g., a graph’s connected components

**Operations** (sketched in code after this list)**:**
  * Find(u): returns the name of the set containing u
      * Two nodes u and v are in the same set exactly when Find(u) = Find(v)
      * Goal implementation: O(log n)
  * Union(A, B): merge sets A and B into one set
      * Goal implementation: O(log n)
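
A minimal Python sketch of such a structure, using union by rank plus path compression, which meets (and in fact beats) the O(log n) goals above, giving nearly constant amortized time per operation:
<code python>
# Union-find with union by rank and path compression.
class UnionFind:
    def __init__(self, elements):
        self.parent = {u: u for u in elements}  # each u starts alone
        self.rank = {u: 0 for u in elements}

    def find(self, u):
        # Walk up to the set's representative, compressing as we go
        while self.parent[u] != u:
            self.parent[u] = self.parent[self.parent[u]]
            u = self.parent[u]
        return u

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False               # already in the same set
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra           # attach shallower tree under deeper
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

uf = UnionFind([1, 2, 3])
uf.union(1, 2)
print(uf.find(1) == uf.find(2), uf.find(1) == uf.find(3))  # True False
</code>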

===Implementing Kruskal’s Algorithm===
  * Using the union-find data structure
      * Build the set T of edges in the MST
      * Maintain a set for each connected component
  * Using the best implementation of union-find
      * Sorting: O(m log n)
      * Union-find operations: O(m α(m, n)), where α is the very slowly growing inverse Ackermann function
      * Total: O(m log n)

**The Algorithm:**
<code>
Sort edge weights so that c<sub>1</sub> ≤ c<sub>2</sub> ≤ ... ≤ c<sub>m</sub>
T = {}
foreach (u ∈ V) make a set containing singleton u
for i = 1 to m
    (u,v) = e<sub>i</sub>
    if (u and v are in different sets)
        T = T ∪ {e<sub>i</sub>}
        merge the sets containing u and v
return T
</code>
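
A runnable Python version of the pseudocode above, with a compact inline union-find standing in for the full data structure; the (cost, u, v) edge format is an assumption:
<code python>
# Kruskal's Algorithm backed by a minimal union-find.
def kruskal(nodes, edges):
    parent = {u: u for u in nodes}       # one singleton set per node

    def find(u):                         # representative, with compression
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    T = []
    for cost, u, v in sorted(edges):     # c1 <= c2 <= ... <= cm
        ru, rv = find(u), find(v)
        if ru != rv:                     # u and v are in different sets
            T.append((cost, u, v))
            parent[ru] = rv              # merge the two sets
    return T

edges = [(1, "a", "b"), (2, "b", "c"), (4, "a", "c")]
print(kruskal(["a", "b", "c"], edges))  # [(1, 'a', 'b'), (2, 'b', 'c')]
</code>
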
====Section 4.7: Clustering====
  
===Clustering===
Given a set U of n objects (or points) labeled p<sub>1</sub>, ..., p<sub>n</sub>, classify them into coherent groups
  * **Problem:** divide the objects into clusters so that points in different clusters are far apart
      * Requires a quantification of distance
  * Applications
      * Routing in mobile ad hoc networks
      * Identifying patterns in gene expression
      * Identifying patterns in web application use cases
          * Sets of URLs
      * Similarity searching in medical image databases

**Distance Function**
The numeric value specifying the "closeness" of two objects.
Assume the distance function satisfies several natural properties:
  * d(p<sub>i</sub>, p<sub>j</sub>) = 0 iff p<sub>i</sub> = p<sub>j</sub> (identity of indiscernibles)
  * d(p<sub>i</sub>, p<sub>j</sub>) ≥ 0 (non-negativity)
  * d(p<sub>i</sub>, p<sub>j</sub>) = d(p<sub>j</sub>, p<sub>i</sub>) (symmetry)

**Problem: k-Clustering of Maximum Spacing**
  * //k-clustering:// Divide objects into k non-empty groups
  * //Spacing:// Min distance between any pair of points in different clusters
  * //k-clustering of maximum spacing:// Given an integer k, find a k-clustering of maximum spacing

**Greedy Clustering Algorithm**
  * Single-link k-clustering algorithm (see the sketch after the theorem below)
      * Form a graph on the vertex set U, corresponding to n clusters
      * Find the closest pair of objects such that each object is in a different cluster, and add an edge between them
      * Repeat n - k times, until there are exactly k clusters
  * This is the same as Kruskal’s algorithm, except we stop when there are k connected components
  * Equivalent to finding the MST and deleting the k - 1 most expensive edges

**//Theorem://** Let C denote the clustering C<sub>1</sub>, ..., C<sub>k</sub> formed by deleting the k - 1 most expensive edges of an MST. C is a k-clustering of maximum spacing.
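
A minimal Python sketch of single-link k-clustering: it is just Kruskal's merging step, stopped once exactly k clusters remain; the (dist, p, q) distance-list format is an assumption:
<code python>
# Single-link k-clustering via Kruskal's merging step.
def single_link_clustering(points, distances, k):
    parent = {p: p for p in points}      # n singleton clusters

    def find(u):                         # representative, with compression
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    clusters = len(points)
    for dist, p, q in sorted(distances): # closest pairs first
        if clusters == k:
            break                        # stop at k connected components
        rp, rq = find(p), find(q)
        if rp != rq:
            parent[rp] = rq              # merge the two clusters
            clusters -= 1
    groups = {}
    for p in points:
        groups.setdefault(find(p), []).append(p)
    return list(groups.values())

dists = [(1, "a", "b"), (5, "b", "c"), (6, "a", "c")]
print(single_link_clustering(["a", "b", "c"], dists, 2))  # [['a', 'b'], ['c']]
</code>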
  
====Section 4.8: Huffman Codes and Data Compression====
  
===Huffman Codes===
**Problem: Encoding**
  * Computers use bits: 0s and 1s
  * Need to translate the symbols humans use into a form computers can store
  * Map each symbol → a unique sequence of 0s and 1s
  * This process is called encoding
  * The least number of bits needed to encode 26 letters, space, and 5 punctuation marks (, . ? ! ‘) is 5, since these 32 characters = 2<sup>5</sup>
  * For a longer string of characters, we can use shorter encodings for frequently used characters like ETAOINSHRDLU
      * Goal: an optimal encoding that takes advantage of the non-uniformity of letter frequencies
      * Looking to represent data as compactly as possible
      * Example: Morse code, which is a variable-length encoding
      * We run into a problem when doing this: ambiguity caused by the encoding of one character being a prefix of the encoding of another

**Optimal Prefix Codes**
  * Solution to the ambiguity: prefix codes, which map letters to bit strings such that no encoding is a prefix of any other
      * Goal: minimize the average number of bits per letter (ABL)
      * f<sub>x</sub>: the frequency with which letter x occurs
      * |γ(x)|: the length of the encoding of x
      * Minimize ABL = Σ<sub>x∈S</sub> f<sub>x</sub>|γ(x)|
  * Binary tree to represent prefix codes
      * Exposes structure better than a list of mappings
          * Each leaf node is a letter
          * Follow the path to the letter
              * Going left: 0, going right: 1
  * Recursively generate the tree
      * All letters start in the root node
      * For all letters in a node:
          * If the encoding begins with 0, the letter belongs in the left subtree
          * Otherwise (the encoding begins with 1), the letter belongs in the right subtree
          * If this is the last bit of the encoding, make the letter a leaf node of that subtree
          * Shift the encoding by one bit
          * Process the left and right children
  * Tree Properties
      * The length of the path from the root to a leaf is that leaf's depth
      * The binary tree T corresponding to the optimal prefix code is full, i.e., each internal node has two children
  * Conclusions
      * The binary tree corresponding to the optimal prefix code is full, i.e., each internal node has two children
      * We want to label the leaf nodes of the binary tree corresponding to the optimal prefix code such that the nodes with greatest depth have least frequency
      * The two letters with least frequency are definitely going to be siblings

**Huffman’s Algorithm**
  * Data structures needed:
      * Binary tree for the prefix codes
      * Priority queue for choosing the node with lowest frequency
  * Costs
      * Inserting a node into or extracting the minimum from the PQ: O(log n)
      * Number of insertions and extractions: O(n)
      * Total: O(n log n)

<code>
To construct a prefix code for an alphabet S, with given frequencies:
    If S has two letters then
        Encode one letter using 0 and the other letter using 1
    Else
        Let y* and z* be the two lowest-frequency letters
        Form a new alphabet S’ by deleting y* and z* and replacing them with a new letter ω of frequency f<sub>y*</sub> + f<sub>z*</sub>
        Recursively construct a prefix code γ’ for S’, with tree T’
        Define a prefix code for S as follows:
            Start with T’
            Take the leaf labeled ω and add two children below it labeled y* and z*
    Endif
</code>
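
A minimal Python sketch of this algorithm, using heapq as the priority queue and nested tuples as the tree (the frequency table is a made-up example); it also prints the resulting ABL:
<code python>
import heapq
from itertools import count

# Huffman's algorithm with a heap-based priority queue: repeatedly
# merge the two lowest-frequency letters, as in the pseudocode above.
def huffman(freqs):
    tiebreak = count()  # keeps comparisons away from the tree tuples
    heap = [(f, next(tiebreak), letter) for letter, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        # Extract the two lowest-frequency letters y* and z* ...
        fy, _, y = heapq.heappop(heap)
        fz, _, z = heapq.heappop(heap)
        # ... and replace them with a merged letter of frequency fy + fz
        heapq.heappush(heap, (fy + fz, next(tiebreak), (y, z)))
    _, _, tree = heap[0]

    codes = {}
    def assign(node, prefix):
        if isinstance(node, tuple):        # internal node: two children
            assign(node[0], prefix + "0")  # going left: 0
            assign(node[1], prefix + "1")  # going right: 1
        else:
            codes[node] = prefix           # leaf node: a letter
    assign(tree, "")
    return codes

freqs = {"a": 0.45, "b": 0.30, "c": 0.20, "d": 0.05}
codes = huffman(freqs)
print(codes)
print("ABL:", sum(f * len(codes[x]) for x, f in freqs.items()))  # 1.8
</code>
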
====Section 4.9: Minimum-Cost Arborescences: A Multi-Phase Greedy Algorithm====