Chapter 2.4: A Survey of Common Running Times

Linear Time

An algorithm runs in O(n) time when its running time is at most a constant factor times the size of the input, n. In other words, the algorithm processes the input in a single pass. For example:

  • Computing the Maximum - given a list of n numbers, we can find the maximum in linear time by processing the numbers a_1, a_2, …, a_n in order and keeping a running estimate of the maximum as we go. Each time we encounter a number a_i, we compare it with the current estimate, and if a_i is larger, we update the estimate to a_i (see the first sketch after this list).
  • Merging Two Sorted Lists - given two sorted lists, we repeatedly compare the front (smallest remaining) element of each list and append the smaller of the two to a new list. The pointer into the list that just gave up an element then advances to its next element, and the process repeats until both lists are exhausted (see the second sketch after this list).
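
A minimal sketch of the maximum computation in Python (the function name find_max and the sample list are just for illustration):

    def find_max(numbers):
        # Keep a running estimate of the maximum, updating it whenever
        # we see a larger value: one pass over the input, O(n) time.
        current_max = numbers[0]
        for x in numbers[1:]:
            if x > current_max:
                current_max = x
        return current_max

    print(find_max([3, 9, 2, 7]))  # 9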
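
And a sketch of the linear-time merge, assuming both input lists are already sorted (the helper name merge is ours, chosen for illustration):

    def merge(a, b):
        # Repeatedly take the smaller front element of a or b. Each element
        # is appended exactly once, so the total work is O(n), where n is
        # the combined length of the two lists.
        i, j, merged = 0, 0, []
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                merged.append(a[i])
                i += 1
            else:
                merged.append(b[j])
                j += 1
        merged.extend(a[i:])  # at most one of these still has leftovers
        merged.extend(b[j:])
        return merged

    print(merge([1, 4, 6], [2, 3, 7]))  # [1, 2, 3, 4, 6, 7]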

O(n log n) Time

An algorithm runs in O(n log n) time when it splits its input into two equal-sized pieces, solves each piece recursively, and then combines the two solutions in linear time; the recurrence T(n) ≤ 2T(n/2) + cn solves to O(n log n). Many sorting algorithms run in this time. The merge sort algorithm divides the set of input numbers into two equal halves, sorts each half recursively, and then merges the two sorted halves into a single, sorted list. Many other algorithms have this running time because sorting is their most expensive step.
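
A sketch of merge sort along these lines, with the linear-time merge of the two halves written inline:

    def merge_sort(nums):
        # Split in half, sort each half recursively, merge in linear time:
        # T(n) = 2*T(n/2) + O(n), which solves to O(n log n).
        if len(nums) <= 1:
            return nums
        mid = len(nums) // 2
        left, right = merge_sort(nums[:mid]), merge_sort(nums[mid:])
        # Linear-time merge of the two sorted halves.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]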

Quadratic Time

One of the most common ways a running time of O(n^2) arises is by performing a search over all pairs of input items and spending constant time per pair. Quadratic time also arises naturally from nested loops: an algorithm consists of a loop with O(n) iterations, and each iteration of that loop launches an inner loop that takes O(n) time.
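
One standard instance of this all-pairs pattern is computing the closest pair of points by brute force; a sketch, assuming the points are given as (x, y) tuples:

    def closest_pair_distance(points):
        # Brute force over all pairs: O(n^2) pairs, constant work per pair.
        best = float("inf")
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                (x1, y1), (x2, y2) = points[i], points[j]
                d = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
                best = min(best, d)
        return best

    print(closest_pair_distance([(0, 0), (3, 4), (1, 1)]))  # 1.414...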

Cubic Time

More elaborate sets of nested loops lead to algorithms that run in O(n^3) time. For example, given n sets S_1, …, S_n, each of size at most n, we can determine whether some pair of sets has an element in common in cubic time. The outer loop runs through each set S_i, the middle loop runs through each set S_j, and the inner loop runs through each element of S_i, checking whether it is in S_j. Since each set has size at most O(n), the inner loop takes O(n) time; the loop over S_j contributes O(n) iterations and the loop over S_i another O(n) iterations, so the total running time is O(n^3).
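
A sketch of this triply nested loop, with each set represented as a plain list so the innermost membership test really is a linear scan (the function name and sample data are just illustrative):

    def some_pair_shares_element(sets):
        # sets: a list of n lists, each of length at most n, so the
        # membership test below is itself a linear scan over S_j.
        # Three nested O(n) loops give O(n^3) in total.
        n = len(sets)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                for element in sets[i]:      # scan every element of S_i
                    if element in sets[j]:   # linear scan of S_j
                        return (i, j)        # found a common element
        return None

    print(some_pair_shares_element([[1, 2], [3, 4], [4, 5]]))  # (1, 2)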

O(n^k) Time

Just as a brute-force search over all pairs formed from a set of n items gives a running time of O(n^2), a brute-force search over all subsets of size k gives a running time of O(n^k) for any constant k, since there are O(n^k) such subsets to examine.
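
A sketch of this pattern, using a hypothetical property (a size-k subset summing to a target) purely as a stand-in for whatever is being searched for; itertools.combinations enumerates the O(n^k) subsets:

    from itertools import combinations

    def has_k_subset_with_sum(nums, k, target):
        # Brute force over all subsets of size k: there are
        # C(n, k) = O(n^k) such subsets for constant k.
        for subset in combinations(nums, k):
            if sum(subset) == target:
                return True
        return False

    print(has_k_subset_with_sum([1, 4, 6, 2], 2, 8))  # True (6 + 2)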

Beyond Polynomial Time

There are two other kinds of bounds that come up frequently: exponential, such as O(2^n), which arises in brute-force search over all subsets of an n-element set, and factorial, O(n!), which arises in brute-force search over all orderings of n items.
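
For instance, a brute-force search over all 2^n subsets can be written with bitmasks; the scoring function here is a hypothetical placeholder for whatever the search is trying to optimize:

    def brute_force_best_subset(items, score):
        # Enumerate all 2^n subsets via bitmasks and keep the one with the
        # highest score: exponential time in the number of items.
        n = len(items)
        best_subset, best_score = (), float("-inf")
        for mask in range(1 << n):
            subset = tuple(items[i] for i in range(n) if mask & (1 << i))
            s = score(subset)
            if s > best_score:
                best_subset, best_score = subset, s
        return best_subset

    print(brute_force_best_subset([1, 2, 3], sum))  # (1, 2, 3)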

Sublinear Time

In some cases, running times can be asymptotically smaller than linear. This happens when the input can be queried indirectly rather than read through completely, so the algorithm never has to look at all of it. Binary search is the best-known example of this case.
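
A sketch of binary search over a sorted list, which examines only O(log n) elements:

    def binary_search(sorted_list, target):
        # Each probe halves the remaining range, so only O(log n) elements
        # are ever examined: sublinear in the size of the input.
        lo, hi = 0, len(sorted_list) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_list[mid] == target:
                return mid
            elif sorted_list[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1  # not found

    print(binary_search([1, 3, 5, 7, 9], 7))  # 3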

This section was pretty easy to read and understand. I don't really see why it was important to separate all of the polynomial time cases, unless it just made it easier to illustrate with examples. I would rate this section an 8.
