Section 2.1 - Computational Tractability

To begin with, we will focus on analyzing the worst-case running time: we will look for a bound on the largest possible running time the algorithm could have over all inputs of a given size N and see how this scales with N. (We don't use average-case analysis for a few reasons. See pg 31.)

We eventually arrive at a definition of efficiency: An algorithm is efficient if it has a polynomial running time.

The growth rates of polynomial functions and exponential functions are hugely different as well. (See table on page 34.) One of the benefits of making polynomial time our definition of efficiency is that it becomes negatable: it is easier to express that there is no efficient algorithm for a particular problem.

I became more confused about this topic after reading this section and only found it a 7 on an interest scale. I think that it was more clear in class when we could work things out on the board and concepts were explained in easier-to-understand (more conversational?) terms.

Section 2.2 - Asymptotic Order of Growth

Our discussion of computational tractability has turned out to be intrinsically based on our ability to express the notion that an algorithm's worst-case running time on inputs of size n grows at a rate that is at most proportional to some function f(n). The function f(n) then becomes a bound on the running time of the algorithm.

When we provide a bound on the running time of an algorithm, we will generally be counting the number of pseudo-code steps that are executed; in this context, one step will consist of assigning a value to a variable, looking up an entry in an array, following a pointer, or performing an arithmetic operation on a fixed-size integer.

We actually don't want to find overly concrete statements about the running time of an algorithm on inputs of size n, because 1) it would be a lot of work, 2) it makes it harder to compare algorithms that have similar behavior, and 3) the extra detail is just kind of meaningless.

Asymptotic Upper Bounds Let T(n) be a function (say, the worst-case running time of a certain algorithm on an input of size n). Given another function f(n), we say that T(n) is O(f(n)) if there exist constants c > 0 and n0 >= 0 so that for all n >= n0, we have T(n) <= c * f(n). In this case, we will say that T is asymptotically upper-bounded by f.
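
As a concrete example of the definition (my own numbers, not the book's): if T(n) = 3n^2 + 2n, then taking f(n) = n^2, c = 5, and n0 = 1 works, since 3n^2 + 2n <= 5n^2 whenever n >= 1. A tiny Python check of that inequality:

    # Sanity-check the big-O definition for T(n) = 3n^2 + 2n and f(n) = n^2.
    # The constants c = 5 and n0 = 1 are my own example choices, not from the book.
    def T(n):
        return 3 * n**2 + 2 * n

    def f(n):
        return n**2

    c, n0 = 5, 1
    assert all(T(n) <= c * f(n) for n in range(n0, 10000))
    print("T(n) <= 5 * f(n) for every tested n >= 1, so T(n) is O(n^2)")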

Asymptotic Lower Bounds We want to show that for arbitrarily large input sizes n, T(n) is at least a constant multiple of some specific function f(n). So, we say that T(n) is Ω(f(n)) if there exist constants ε > 0 and n0 >= 0 so that for all n >= n0, we have T(n) >= ε * f(n). We will refer to T in this case as being asymptotically lower-bounded by f.

Asymptotic Tight Bounds If we can show that a running time T(n) is both O(f(n)) and Ω(f(n)), then we've found the “right bound” and we can say that T(n) is Θ(f(n)). Asymptotically tight bounds on worst-case running times are nice things to find, since they characterize the worst-case performance of an algorithm precisely up to constant factors.

Properties of Asymptotic Growth Rates

One property is transitivity: if a function f is asymptotically upper-bounded by a function g, and if g is asymptotically upper-bounded by a function h, then f is asymptotically upper-bounded by h. A similar property holds for lower bounds.

It is also useful to have results that quantify the effect of adding two functions. If we have an asymptotic upper bound that applies to each of two functions f and g, then it applies to their sum.

Asymptotic Bounds for Some Common Functions

Polynomials The asymptotic rate of growth for polynomials is determined by their “high-order term” - the one that determines the degree.

  • Let f be a polynomial of degree d, in which the coefficient a_d is positive. Then f = O(n^d).
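
For example (my own numbers, not the book's): f(n) = 3n^2 + 5n + 2 is O(n^2), since for n >= 1 each lower-order term is at most its coefficient times n^2, so f(n) <= (3 + 5 + 2) * n^2 = 10n^2 for all n >= 1.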

A polynomial-time algorithm is one whose running time T(n) is O(n^d) for some constant d where d is independent of the input size.

Logarithms For every base b, the function log_b n is asymptotically upper-bounded by every function of the form n^x, even for values of x arbitrarily close to 0. The base is not important when writing bounds using asymptotic notation.

Exponentials For every r > 1 and every d > 0, we have n^d = O(r^n). Asymptotically speaking, exponential functions with different bases are all different. Still, when people say a running time is “exponential,” it's usually clear they mean that it grows at least as fast as some exponential function.

After reading this section of the textbook, I had to go look at the slides from class to get it all straight, but when combined with the notes from class, I would give this an 8.

Section 2.3 - Implementing the Stable Matching Algorithm Using Lists and Arrays

(Study note: Just look at slides for arrays and lists.)

We need to consider each step of the algorithm and understand what data structure allows us to implement it efficiently. Essentially, we need to be able to do each of four things in constant time.

  1. We need to be able to identify a free man.
  2. We need, for a man m, to be able to identify the highest-ranked woman to whom he has not yet proposed.
  3. For a woman w, we need to decide if w is currently engaged, and if she is, we need to identify her current partner.
  4. For a woman w and two men m and m', we need to be able to decide, again in constant time, which of m or m' is preferred by w.

First, we will select a free man from the set of free men, represented by a linked list. We take the first man off of the list and delete him from the list if he gets engaged. If another man m' becomes free, we can insert him at the front of the list. This is all constant time.

Then, consider a man m. We need to find the highest-ranked woman on his preference list to whom he has not already proposed. We can use an array to represent the women he will propose to, in order. Starting at the first position, we increase the index each time he proposes to a woman, whether or not she accepts his proposal.

Next, we need an array that keeps track of a woman w's current partner, m'. If a woman is not engaged, we use a special null symbol. These first three steps can all be done in O(1).

The trickiest part is maintaining women's preferences to keep the fourth step efficient. Consider a step where man m proposes to a woman w. Assume w is already engaged and her current partner is m'. We need to decide whether w prefers m or m'. We can do this by creating an n x n array that contains the rank of each man in the sorted order of each woman's preferences. We can create this array in linear time per woman, for a total initial time investment proportional to n^2. Then we simply have to compare the two rank values, which allows us to do the fourth step in constant time.

All of the above data structures allow us to implement the GS algorithm in O(n^2).
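
Here is a rough Python sketch of the bookkeeping described above, mostly to convince myself the data structures line up. The names (free_men, next_proposal, current, rank) and the convention that men and women are numbered 0..n-1 with preference lists given as arrays are my own assumptions, not the book's code:

    from collections import deque

    def stable_matching(men_prefs, women_prefs):
        # men_prefs[m] and women_prefs[w] are preference lists of length n.
        n = len(men_prefs)

        free_men = deque(range(n))     # plays the role of the linked list of free men
        next_proposal = [0] * n        # index of the next woman on m's list to propose to
        current = [None] * n           # current[w] = w's partner, or None (the "null symbol")

        # rank[w][m] = position of man m in woman w's preference list.
        # Built once, in time proportional to n^2.
        rank = [[0] * n for _ in range(n)]
        for w in range(n):
            for pos, m in enumerate(women_prefs[w]):
                rank[w][m] = pos

        while free_men:
            m = free_men[0]                         # step 1: a free man, O(1)
            w = men_prefs[m][next_proposal[m]]      # step 2: highest-ranked woman not yet proposed to
            next_proposal[m] += 1                   # advance whether or not she accepts
            if current[w] is None:                  # step 3: is w engaged?
                current[w] = m
                free_men.popleft()
            elif rank[w][m] < rank[w][current[w]]:  # step 4: does w prefer m to her partner? O(1)
                free_men.popleft()
                free_men.appendleft(current[w])     # her old partner becomes free again
                current[w] = m
            # otherwise w rejects m; he stays at the front of the list and proposes again

        return current                              # current[w] = m for each matched pair

For instance, stable_matching([[0, 1], [0, 1]], [[0, 1], [1, 0]]) returns [0, 1], i.e. woman 0 is matched with man 0 and woman 1 with man 1.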

I thought this section was interesting (9) and made a lot of sense when combined with lecture.

Section 2.4 - A Survey of Common Running Times

Linear Time An algorithm that runs in O(n), or linear, time has a very natural property: its running time is at most a constant factor times the size of the input. One basic way to get an algorithm with this running time is to process the input in a single pass, spending a constant amount of time on each item of input encountered. Two example linear-time algorithms are computing the maximum and merging two sorted lists.
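
A quick sketch of the merging idea in Python (my own code, not the book's): walk through both sorted lists with one pointer each, always taking the smaller front element, so each element is handled a constant number of times.

    def merge(a, b):
        # a and b are sorted lists; returns their sorted merge in O(len(a) + len(b)) time.
        i, j, out = 0, 0, []
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i])
                i += 1
            else:
                out.append(b[j])
                j += 1
        out.extend(a[i:])   # at most one of these still has elements left
        out.extend(b[j:])
        return out

    # merge([1, 4, 9], [2, 3, 10]) -> [1, 2, 3, 4, 9, 10]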

Cubic Time Elaborate sets of nested loops often lead to algorithms that run in O(n^3) time. Example: given n sets, deciding whether some pair of them is disjoint by checking, for each pair, whether they have an element in common (see page 52).
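
A sketch of the kind of triple loop the book has in mind (my own Python, assuming the n sets are subsets of {0,…,n-1} and that membership in any set can be tested in O(1) via a precomputed table): there are O(n^2) pairs of sets and O(n) work per pair, giving O(n^3) overall.

    def find_disjoint_pair(sets, n):
        # sets is a list of n lists, each a subset of {0, ..., n-1}.
        # Precompute an n x n membership table so "p in sets[j]" is O(1).
        member = [[False] * n for _ in range(n)]
        for j, s in enumerate(sets):
            for p in s:
                member[j][p] = True
        for i in range(len(sets)):                               # O(n^2) pairs...
            for j in range(i + 1, len(sets)):
                if not any(member[j][p] for p in sets[i]):       # ...O(n) work per pair
                    return (i, j)                                # found a disjoint pair
        return None                                              # every pair shares an element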

O(n^k) We obtain a running time of O(n^k) for any constant k when we search over all subsets of size k. Example: finding independent sets of size k in a graph (page 53). (It is believed that no algorithm to find k-node independent sets in arbitrary graphs can avoid having some dependence on k in the exponent.)
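
A brute-force sketch in Python (my own code, using itertools and assuming the graph is given as a set of frozenset edges over vertices 0..n-1): try every one of the roughly n^k subsets of size k and check that no edge has both endpoints inside it.

    from itertools import combinations

    def has_independent_set(n, edges, k):
        # edges is a set of frozensets {u, v}; we brute-force all C(n, k) subsets.
        for subset in combinations(range(n), k):
            if all(frozenset((u, v)) not in edges for u, v in combinations(subset, 2)):
                return True    # no edge inside this subset, so it is independent
        return False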

Beyond Polynomial Time Two kinds of bounds that come up very frequently are 2^n and n!. For example, we get 2^n as a running time for a search algorithm that must consider all subsets. The function n! grows even more rapidly than 2^n. Search spaces of size n! tend to arise for one of two reasons - first, when n! is the number of ways to match up n items with n other items and second, when n! is all the ways to arrange n items in order.

Sublinear Time There are cases where one encounters running times that are asymptotically smaller than linear. Since it takes linear time just to read the input, these situations tend to arise in a model of computation where the input can be “queried” indirectly rather than read completely, and the goal is to minimize the amount of querying that must be done. Example: binary search algorithm → running time of O(log n).
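
The binary search idea as a Python sketch (my own code): each probe halves the part of the array that could still contain the target, so only O(log n) entries are ever “queried.”

    def binary_search(a, target):
        # a is a sorted list; returns an index i with a[i] == target, or None.
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2       # probe the middle; the active range halves each time
            if a[mid] == target:
                return mid
            elif a[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return None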

I give this section an 8.5.

Section 2.5 - A More Complex Data Structure: Priority Queues

The Problem For the Stable Matching algorithm, we want to be able to easily add and delete elements from a set S and to be able to select an element when the algorithm calls for it. A priority queue is designed for applications in which elements have a priority value, or key, and each time we need to select an element from S, we want to take the one with highest priority.

A priority queue is a data structure that maintains a set of elements S, where each element v ∈ S has an associated value key(v) that denotes the priority of element v; smaller keys represent higher priorities. Priority queues support the addition and deletion of elements from the set, and also the selection of the element with smallest key. A motivating application for priority queues, and one that is useful to keep in mind when considering their general function, is the problem of managing real-time events. Each process has a priority, or urgency, but processes do not arrive in order of their priority. A priority queue lets us select the process with minimum key (highest priority) while inserting new processes as they arrive.

We will show how to implement a pq containing at most n elements at any time so that elements can be added and deleted, and the element with minimum key selected, in O(log n) time per operation.

- A sequence of O(n) pq operations can be used to sort a set of n numbers. → With a pq that can perform insertion and the extraction of minima in O(log n) per operation, we can sort n numbers in O(n log n).
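
As a sanity check of that claim, here is the idea using Python's built-in heapq module (a binary-heap priority queue, so each push and pop is O(log n)); this is my own illustration, not the book's code:

    import heapq

    def pq_sort(numbers):
        # n insertions followed by n extract-mins: O(n log n) total.
        heap = []
        for x in numbers:
            heapq.heappush(heap, x)
        return [heapq.heappop(heap) for _ in range(len(heap))]

    # pq_sort([5, 1, 4, 2, 3]) -> [1, 2, 3, 4, 5]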

A Data Structure for Implementing a PQ We will use a data structure called a heap to implement a pq. The heap data structure combines the benefits of a sorted array and list for purposes of this application. Conceptually, we think of a heap as a balanced binary tree, with a root and where each node can have up to two children, a left and a right child. The keys in such a binary tree are said to be in heap order if the key of any element is at least as large as the key of the element at its parent node in the tree. In other words,

     Heap order: For every element v, at a node i, the element w at i's parent satisfies key(w) ≤ key(v).

Before we discuss how to work with a heap, we need to consider what data structure should be used to represent it. We can use pointers: each node of the heap could keep the element it stores, its key, and three pointers pointing to the two children and the parent of the heap node. We can avoid using pointers, however, if a bound N is known in advance on the total number of elements that will ever be in the heap at any one time. Such heaps can be maintained in an array H indexed by i = 1,…,N. We will think of the heap nodes as corresponding to the positions in this array. H[1] is the root, and for any node at position i, the children are the nodes at positions leftChild(i) = 2i and rightChild(i) = 2i + 1. The parent of a node at position i is at position parent(i) = ⌊i/2⌋. If the heap has n < N elements at some time, we will use the first n positions of the array to store the n heap elements and use length(H) to denote the number of elements in H. This representation keeps the heap balanced at all times.

Implementing the Heap Operations The heap element with smallest key is at the root, so it takes O(1) time to identify the minimal element.

Consider adding a new heap element v, and assume that our heap H has n < N elements so far. We can add the new element v to the final position i = n + 1 by setting H[i] = v. Unfortunately, this does not maintain the heap property, because the key of element v may be smaller than the key of its parent. We will use the procedure Heapify-up to fix our heap, calling it recursively until the heap is fixed.

   Heapify-up(H, i):
      If i > 1 then
          let j = parent(i) = ⌊i/2⌋
          If key[H[i]] < key[H[j]] then
               swap the array entries H[i] and H[j]
               Heapify-up(H, j)
          Endif
      Endif

To see why this works and eventually fixes the heap, let's look more fully at the not-yet-fixed heap. Assume that H is an array and v is the element in position i. We say that “H is almost a heap with the key of H[i] too small” if there is a value α ≥ key(v) such that raising the value of key(v) to α would make the resulting array satisfy the heap property. The procedure Heapify-up(H, i) fixes the heap property in O(log i) time, assuming that H is almost a heap with the key of H[i] too small. Using Heapify-up, we can insert a new element into a heap of n elements in O(log n) time.

Consider deleting the element at position i. After deleting it, there will be a “hole” at position i. So, as a patch, we move the element w from the last position into position i. However, position i probably isn't where w belongs, so the heap property may be violated… If w's key is too big for its new position, we run Heapify-down (if it is too small, we would run Heapify-up instead).

   Heapify-down(H, i):
       Let n = length(H)
       If 2i > n then
           Terminate with H unchanged
       Else if 2i < n then
           Let left = 2i and right = 2i + 1
           Let j be the index that minimizes key[H[left]] and key[H[right]]
       Else if 2i = n then
           Let j = 2i
       Endif
       If key[H[j]] < key[H[i]] then
           swap the array entries H[i] and H[j]
           Heapify-down(H,j)
       Endif

Assume that H is an array and w is the element in position i. We say that “H is almost a heap with the key of H[i] too big” if there is a value α ≤ key(w) such that lowering the value of key(w) to α would make the resulting array satisfy the heap property. The procedure Heapify-down(H, i) fixes the heap property in O(log n) time, assuming that H is almost a heap with the key value of H[i] too big. Using Heapify-up or Heapify-down, we can delete an element from a heap of n elements in O(log n) time.

Implementing PQs with Heaps The heap data structure with the Heapify-down and Heapify-up operations can efficiently implement a pq that is constrained to hold at most N elements at any point in time. Here is a summary of operations we will use:

  • StartHeap(N) returns an empty heap H that is set up to store at most N elements. This operation takes O(N) time, as it involves initializing the array that will hold the heap.
  • Insert(H, v) inserts the item v into heap H. If the heap currently has n elements, this takes O(log n) time.
  • FindMin(H) identifies the minimum element in the heap H but does not remove it. This takes O(1) time.
  • Delete(H, i) deletes the element in heap position i. This is implemented in O(log n) time for heaps that have n elements.
  • ExtractMin(H) identifies and deletes an element with minimum key value from a heap. This is a combination of the preceding two operations and so it takes O(log n) time.
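
To tie the section together, here is a rough Python translation of the operations above (my own sketch, not the book's code). It uses 0-based array indices instead of the book's 1-based ones, so leftChild(i) = 2i + 1, rightChild(i) = 2i + 2, and parent(i) = (i - 1) // 2; Delete patches the hole with the last element and then runs Heapify-up and Heapify-down to restore heap order.

    class PQ:
        """Binary-heap priority queue of (key, item) pairs; smallest key = highest priority."""

        def __init__(self, N):
            # StartHeap(N): we just start an empty Python list (it grows as needed).
            self.h = []

        def _heapify_up(self, i):
            # Bubble position i up while its key is smaller than its parent's key.
            while i > 0:
                j = (i - 1) // 2                  # parent
                if self.h[i][0] < self.h[j][0]:
                    self.h[i], self.h[j] = self.h[j], self.h[i]
                    i = j
                else:
                    break

        def _heapify_down(self, i):
            # Push position i down while some child has a smaller key.
            n = len(self.h)
            while True:
                left, right, j = 2 * i + 1, 2 * i + 2, i
                if left < n and self.h[left][0] < self.h[j][0]:
                    j = left
                if right < n and self.h[right][0] < self.h[j][0]:
                    j = right
                if j == i:
                    break
                self.h[i], self.h[j] = self.h[j], self.h[i]
                i = j

        def insert(self, key, item):
            # Insert(H, v): add at the end, then Heapify-up; O(log n).
            self.h.append((key, item))
            self._heapify_up(len(self.h) - 1)

        def find_min(self):
            # FindMin(H): the root holds the minimum key; O(1).
            return self.h[0]

        def delete(self, i):
            # Delete(H, i): move the last element into the hole, then fix; O(log n).
            last = self.h.pop()
            if i < len(self.h):
                self.h[i] = last
                self._heapify_up(i)      # fixes things if the moved key is too small here
                self._heapify_down(i)    # fixes things if the moved key is too big here

        def extract_min(self):
            # ExtractMin(H): FindMin followed by Delete of the root; O(log n).
            key, item = self.find_min()
            self.delete(0)
            return key, item

For example, after pq = PQ(10); pq.insert(5, "e"); pq.insert(1, "a"); pq.insert(3, "c"), calling pq.extract_min() returns (1, "a").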

I thought this section was really clear and helped my understanding a lot. On a readability/interest scale, I'd give it a 10.
