==== Section 2.2 : Asymptotic Order of Growth ====
In the previous section, we defined efficiency based on an algorithm's worst-case running time on inputs of size n, which grows at a rate that is at most proportional to some function f(n). Thus, f(n) is a bound on the running time of the algorithm. In this section, we seek to express the growth rate of running times and other functions in a way that is insensitive to constant factors and low-order terms.\\ 
\\ 
===Asymptotic Upper Bounds===
Let T(n) be a function. Given some other function f(n), we say that T(n) is O(f(n)) if, for sufficiently large n, the function T(n) is bounded above by a constant multiple of f(n); we write this as T(n) = O(f(n)). Specifically, T(n) is O(f(n)) if there exist constants c > 0 and n<sub>0</sub> ≥ 0 so that for all n ≥ n<sub>0</sub>, we have T(n) ≤ c⋅f(n). In this case, we will say that T is asymptotically upper-bounded by f.\\ 
\\ 
Consider an algorithm whose running time has the form T(n) = pn<sup>2</sup> + qn + r for positive constants p, q, and r. We'd like to claim that any such function is O(n<sup>2</sup>). To see why, we notice that for all n ≥ 1, we have qn ≤ qn<sup>2</sup>, and r ≤ rn<sup>2</sup>. So we can write T(n) = pn<sup>2</sup> + qn + r ≤ pn<sup>2</sup> + qn<sup>2</sup> + rn<sup>2</sup> = (p + q + r)n<sup>2</sup> for all n ≥ 1. This inequality is exactly what the definition of O(⋅) requires: T(n) ≤ cn<sup>2</sup>, where c = p + q + r.\\ 
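\\ 
To make the inequality concrete, here is a minimal numerical check of the argument, using arbitrary sample values for the constants p, q, and r:

<code python>
# Verify numerically that T(n) = p*n^2 + q*n + r <= (p+q+r)*n^2 for n >= 1.
# The constants below are arbitrary sample values, not from the text.
p, q, r = 3.0, 5.0, 2.0
c = p + q + r  # the constant from the argument above

def T(n):
    return p * n**2 + q * n + r

for n in range(1, 1000):
    assert T(n) <= c * n**2
print("T(n) <= (p+q+r)*n^2 holds for all tested n >= 1")
</code>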
  
===Asymptotic Lower Bounds===
We want to show that this upper bound is the best one possible. To do this, we want to express the notion that for arbitrarily large input sizes n, the function T(n) is at least a constant multiple of some specific function f(n). Thus, we say that T(n) is Ω(f(n)) if there exist constants ε > 0 and n<sub>0</sub> ≥ 0 so that for all n ≥ n<sub>0</sub>, we have T(n) ≥ ε⋅f(n). For the function T(n) = pn<sup>2</sup> + qn + r above, we have T(n) ≥ pn<sup>2</sup> for all n ≥ 0, so T(n) is also Ω(n<sup>2</sup>).\\ 
  
===Asymptotically Tight Bounds===
If a function T(n) is both O(f(n)) and Ω(f(n)), we say that T(n) is Θ(f(n)). In this case, we say that f(n) is an asymptotically tight bound for T(n). For example, the analysis above shows that T(n) = pn<sup>2</sup> + qn + r is Θ(n<sup>2</sup>).\\ 
\\ 
===Properties of Asymptotic Growth Rates===
**//Transitivity//**\\ 
If a function f is asymptotically upper-bounded by a function g, and if g in turn is asymptotically upper-bounded by a function h, then f is asymptotically upper-bounded by h.\\ 
\\ 
**//Sums of Functions//**\\ 
Suppose that f and g are two functions (taking non-negative values) such that g = O(f). Then f + g = Θ(f). In other words, f is an asymptotically tight bound for the combined function f + g. For instance, since n = O(n<sup>2</sup>), we have n<sup>2</sup> + n = Θ(n<sup>2</sup>).\\ 
\\ 
===Asymptotic Bounds for Some Common Functions===
**//Polynomials//**\\ 
Let f be a polynomial of degree d, in which the coefficient a<sub>d</sub> is positive. Then f = O(n<sup>d</sup>).\\ 
==== Section 2.3 : Implementing the Stable Matching Algorithm Using Lists and Arrays ====
In order to asymptotically analyze the running time of an algorithm, we don't necessarily have to program, compile, and execute it. Instead, we must think about how the data will be represented and manipulated in an implementation of the algorithm, so as to bound the number of computational steps it takes. In this representative example, we analyze the data structures needed to implement the Stable Matching Problem.\\ 
\\ 
===Arrays vs. Lists===
\\ 
**//Arrays//**\\ 
An array supports direct access to its ith element in O(1) time, but it has a fixed length, and inserting or deleting elements is expensive.\\ 
\\ 
**//Linked Lists//**\\ 
A linked list supports insertion and deletion of elements in O(1) time, but finding the ith element requires O(i) time, since we must traverse the list from the front.\\ 
\\ 
Given the advantages and disadvantages of arrays and lists, we may want to convert one format to the other depending on the specific input to a problem. In this case, it is easy to convert between the array and list representations in O(n) time.\\ 
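\\ 
As a minimal sketch of this conversion (the Node class below is an illustrative linked-list representation, not from the text):

<code python>
# Converting between array and linked-list representations in O(n) time.
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def array_to_list(arr):
    # Walk the array backwards, prepending each element: O(n) total.
    head = None
    for value in reversed(arr):
        head = Node(value, head)
    return head

def list_to_array(head):
    # Traverse the list once, appending each element: O(n) total.
    arr = []
    while head is not None:
        arr.append(head.value)
        head = head.next
    return arr

print(list_to_array(array_to_list([1, 2, 3])))  # [1, 2, 3]
</code>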
  
===Implementing the Stable Matching Algorithm===
To ensure the running time of this algorithm is at most O(n<sup>2</sup>), we need to be able to do each of the following four things in constant time (a sketch combining them follows the list).\\ 
  1. Identify a free man.
  2. For a man m, identify the highest-ranked woman to whom he has not yet proposed.
  3. For a woman w, decide if w is currently engaged, and if so, identify her current partner.
  4. For a woman w and two men m and m', decide which of m or m' is preferred by w.
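\\ 
A minimal sketch of how these four constant-time operations combine into an O(n<sup>2</sup>) implementation (the variable names are illustrative, and 0-indexed preference lists are assumed):

<code python>
from collections import deque

def stable_matching(men_prefs, women_prefs):
    n = len(men_prefs)
    free_men = deque(range(n))  # (1) a free man is read off the front in O(1)
    next_prop = [0] * n         # (2) next_prop[m]: position of the next woman on m's list
    current = [None] * n        # (3) current[w]: the man w is engaged to, or None
    # (4) ranking[w][m] = position of m on w's list; building it costs O(n^2)
    # once, after which comparing two men for w takes O(1).
    ranking = [[0] * n for _ in range(n)]
    for w in range(n):
        for pos, m in enumerate(women_prefs[w]):
            ranking[w][m] = pos

    while free_men:
        m = free_men[0]
        w = men_prefs[m][next_prop[m]]  # highest-ranked woman m hasn't proposed to
        next_prop[m] += 1
        if current[w] is None:          # w is free and accepts m
            current[w] = m
            free_men.popleft()
        elif ranking[w][m] < ranking[w][current[w]]:  # w prefers m to her partner
            free_men.append(current[w])               # old partner becomes free
            current[w] = m
            free_men.popleft()
        # otherwise w rejects m, and m stays free
    return current  # current[w] is the man matched with woman w
</code>

Each man proposes to each woman at most once, so the loop runs at most n<sup>2</sup> times, and every iteration does only constant work.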
==== Section 2.4 : A Survey of Common Running Times ====
This section provides a survey of common running times such as O(n), O(n log n), and O(n<sup>2</sup>), and some of the typical approaches that lead to them.
  
===Linear Time===
An algorithm that runs in O(n), or linear, time has the property that its running time is at most a constant factor times the size of the input. Examples of algorithms with linear running time are **//computing the maximum//** and **//merging two sorted lists//**, both of which can be done with a single pass over the input.\\ 
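\\ 
A minimal sketch of the linear-time merge: repeatedly take the smaller front element of the two lists, then append whatever remains.

<code python>
def merge(a, b):
    # Merge two sorted lists in O(n) time, where n = len(a) + len(b).
    result = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            result.append(a[i]); i += 1
        else:
            result.append(b[j]); j += 1
    result.extend(a[i:])  # at most one of these two slices is non-empty
    result.extend(b[j:])
    return result

print(merge([1, 4, 9], [2, 3, 10]))  # [1, 2, 3, 4, 9, 10]
</code>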
  
  
===O(n log n) Time===
O(n log n) is the running time of any algorithm that splits its input into two equal-sized pieces, solves each piece recursively, and then combines the two solutions in linear time. A well-known example of this type of algorithm is sorting, in particular **//Mergesort//**. There are many algorithms whose most expensive step is to sort the input, so O(n log n) is a relatively common running time.\\ 
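\\ 
A minimal Mergesort sketch following that recipe: split in half, sort each half recursively, and combine with a linear-time merge.

<code python>
def mergesort(a):
    # O(n log n): two half-sized recursive calls plus a linear-time combine.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    # Linear-time merge of the two sorted halves.
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    return result + left[i:] + right[j:]

print(mergesort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
</code>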
  
  
===Quadratic Time===
Performing a search over all pairs of input items and spending constant time per pair is a common way in which a running time of O(n<sup>2</sup>) arises. Quadratic time also arises naturally from a pair of **//nested loops//**: an algorithm consists of a loop with O(n) iterations, and each iteration of the loop launches an internal loop that takes O(n) time. Multiplying these two factors of n together gives the running time. The brute-force algorithm for finding the closest pair of points is a common example of this kind of algorithm.\\ 
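\\ 
A minimal sketch of the brute-force closest-pair algorithm, with two nested loops over all O(n<sup>2</sup>) pairs and constant work per pair:

<code python>
from math import dist  # Euclidean distance, available in Python 3.8+

def closest_pair(points):
    best, best_d = None, float("inf")
    for i in range(len(points)):             # outer loop: O(n) iterations
        for j in range(i + 1, len(points)):  # inner loop: O(n) per iteration
            d = dist(points[i], points[j])
            if d < best_d:
                best_d, best = d, (points[i], points[j])
    return best

print(closest_pair([(0, 0), (3, 4), (1, 1)]))  # ((0, 0), (1, 1))
</code>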
  
  
===Cubic Time===
More elaborate sets of nested loops often lead to algorithms that run in O(n<sup>3</sup>) time. For example, the algorithm for the problem "for each pair of sets S<sub>i</sub> and S<sub>j</sub>, determine whether S<sub>i</sub> and S<sub>j</sub> have an element in common" has three nested loops that each run in linear time. There are algorithms for this problem that improve on the O(n<sup>3</sup>) running time, but they are complicated, and it is not clear whether the improved algorithms are practical on inputs of reasonable size.\\ 
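\\ 
A minimal sketch of the three nested linear-time loops (the sets are assumed to be given as plain lists, so each membership test is itself a linear scan):

<code python>
def common_element_pairs(sets):
    # For each pair of sets, scan one set and look each element up in the
    # other: three nested O(n) loops, O(n^3) in total.
    results = []
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            common = any(x in sets[j] for x in sets[i])  # inner two loops
            results.append((i, j, common))
    return results

print(common_element_pairs([[1, 2], [3, 4], [2, 3]]))
# [(0, 1, False), (0, 2, True), (1, 2, True)]
</code>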
  
  
===O(n^k) Time===
We obtain a running time of O(n<sup>k</sup>) for any constant k when we search over all subsets of size k. For example, for some fixed constant k, we may want to know if a given n-node input graph G has an independent set of size k. The brute-force approach enumerates all k-element subsets of nodes, of which there are at most n<sup>k</sup>/k!, and checks each one in time polynomial in k. Since we are treating k as a constant, the total running time is O(n<sup>k</sup>).\\ 
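\\ 
A minimal sketch of that brute-force search (the graph is assumed to be given as a list of nodes and a list of edges):

<code python>
from itertools import combinations

def has_independent_set(nodes, edges, k):
    # Try all k-element subsets (about n^k / k! of them); checking one
    # subset means testing all of its pairs, O(k^2) work per subset.
    edge_set = {frozenset(e) for e in edges}
    for subset in combinations(nodes, k):
        if all(frozenset(pair) not in edge_set
               for pair in combinations(subset, 2)):
            return True
    return False

print(has_independent_set([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)], 2))  # True
</code>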
  
  
===Beyond Polynomial Time===
Two kinds of bounds that come up often are 2<sup>n</sup> and n!. For example, suppose we are given a graph and want to find an independent set of maximum size. The outer loop in the brute-force algorithm will run for 2<sup>n</sup> iterations as it tries all subsets of the nodes. Inside the loop, we check all pairs from a set S that can be as large as n nodes, so each iteration of the loop takes at most O(n<sup>2</sup>) time. Multiplying these two together, we get a running time of O(n<sup>2</sup>2<sup>n</sup>). Thus we see that 2<sup>n</sup> arises naturally as a running time for a search algorithm that must consider all subsets.\\ 
\\ 
Search spaces of size n! tend to arise for one of two reasons. First, n! is the number of ways to match up n items with n other items. Second, n! arises in problems where the search space consists of all ways to arrange n items in order. A common example of this is the Traveling Salesman Problem: given a set of n cities, with distances between all pairs, what is the shortest tour that visits all cities?\\ 
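\\ 
A minimal sketch of the O(n<sup>2</sup>2<sup>n</sup>) brute-force search, enumerating the 2<sup>n</sup> subsets with bitmasks:

<code python>
from itertools import combinations

def max_independent_set(nodes, edges):
    edge_set = {frozenset(e) for e in edges}
    best = []
    for mask in range(1 << len(nodes)):  # outer loop: all 2^n subsets
        subset = [v for i, v in enumerate(nodes) if mask >> i & 1]
        # inner check: all pairs in the subset, at most O(n^2) work
        if all(frozenset(p) not in edge_set for p in combinations(subset, 2)):
            if len(subset) > len(best):
                best = subset
    return best

print(max_independent_set([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]))  # [1, 3]
</code>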
  
  
===Sublinear Time===
There are cases where one encounters running times that are asymptotically smaller than linear. These situations tend to arise in a model of computation where the input can be "queried" indirectly rather than read completely, and the goal is to minimize the amount of querying that must be done. The best-known example of this is the **//binary search algorithm//**, which probes the middle of the remaining interval and discards half of it at each step; this successive shrinking of the search region gives a running time of O(log n). In general, O(log n) arises as a time bound whenever we're dealing with an algorithm that does a constant amount of work in order to throw away a constant fraction of the input.\\ 
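\\ 
A minimal binary search sketch over a sorted array; each probe does constant work and discards half of the remaining region, so O(log n) probes suffice:

<code python>
def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1  # not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
</code>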
  