Chapter 2

My notes on Chapter 2 readings

2.1: Computational Tractability

  • We define the efficiency of an algorithm by its worst-case running time as a function of input size, measured against the time a brute-force search would need, and we generally aim for polynomial running time over other sorts of runtimes. While the difference between an n^2 algorithm and an n^5 algorithm may seem big, we'd obviously still significantly prefer n^5 over 2^n running time (see the comparison after this list).
  • I found this section readable, albeit a bit dry and definitely more interesting in class than in a book. 8/10.
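
  To make the polynomial-vs-exponential gap concrete, a quick comparison (my own arithmetic, not a table from the book):

    n        n^2        n^5          2^n
    10       100        100,000      1,024
    100      10,000     10^10        ~1.3 * 10^30

  Even the "bad" polynomial stays astronomically below the exponential as n grows.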

2.2: Asymptotic Order of Growth

  • More specifically than in the previous section, this section defines mathematically how we understand efficiency. Let T(n) be algorithm A's worst-case runtime on inputs of size n. We say T(n) is asymptotically upper-bounded by f(n), written O(f(n)), when the following holds (see the worked example after this list):
    • There is a constant c greater than zero such that for all input sizes larger than some value n_0, T(n) is less than or equal to f(n) times that constant.
  • We define asymptotic lower-bounding similarly, with the exception that T(n) must be greater than or equal to some positive constant times f(n) for all input sizes above some specific value. The asymptotic lower bound is written Ω(f(n)).
  • The asymptotically tight bound, Θ(f(n)), is a bound that lies in both the set of all upper bounds and the set of all lower bounds. This is helpful because, yes, a lot of algorithms are upper-bounded by something enormous like 10000 * e^(1000n), and a lot of algorithms are lower-bounded by constant runtime, but either fact alone tells us next to nothing; knowing that an algorithm's runtime is eventually well modeled by n^2 from both sides actually helps.
  • Asymptotic bounds are transitive, and the bound on a sum of two functions is the maximum (i.e., the faster-growing) of the two bounds. This will be helpful in proving the runtime of algorithms that depend on multiple sub-algorithms whose bounds we know, I would imagine.
  • I enjoyed this section a lot, actually. 9/10.
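
  A worked example of the definitions above (the function and constants are mine, not the book's): take T(n) = 4n^2 + 10n. With c = 5 and n_0 = 10, we have 10n <= n^2 whenever n >= 10, so T(n) <= 5n^2 for all n >= 10, i.e. T(n) = O(n^2). In the other direction, T(n) >= 4n^2 for all n >= 1, so T(n) = Ω(n^2). Both hold, so T(n) = Θ(n^2). The sum rule from the last bullet works the same way: n^3 + n^2 <= 2n^3 for n >= 1, so n^3 + n^2 = O(max(n^3, n^2)) = O(n^3).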

2.3: Implementing the Stable Matching Algorithm Using Lists and Arrays

  • This section spends a lot of time explaining the differences between arrays and lists (p43-45), the gist of which is as follows:
    • Arrays:
      • Fixed size
      • Constant-time access to element at position i
      • O(n) time to determine whether an element is in an unsorted array
      • O(log(n)) time (via binary search) to determine whether an element is in a sorted array
    • Linked lists:
      • Variable size
      • O(i)-time access to element at position i
      • O(n) time to determine whether an element is in the list
      • Constant-time insertion and deletion of elements, given a pointer to the position
  • Therefore, in order to minimize our runtime, we use the following data structures for the Gale-Shapley algorithm:
    • Integers to represent the men & women
    • Array of arrays to represent the preference lists
    • A list to represent the unmatched men
    • An array indexed by each man's integer ID giving the position of the next woman on his preference list to propose to (which implicitly records everyone he has already proposed to)
    • Two arrays to represent the engagements.
  • Thus we can implement the G-S algorithm in O(n^2) time, since each of the at most n^2 proposals takes constant time (a sketch in this spirit follows the list)
  • I mostly skimmed this section because it wasn't that interesting and it was significantly more captivating as a class discussion. 5/10.
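
  A minimal Python sketch of G-S using these data structures (my own code and names — gale_shapley, man_pref, rank, etc. — not the book's pseudocode). Men and women are the integers 0..n-1, and man_pref[m] / woman_pref[w] are preference arrays. The rank table is precomputed so a woman can compare two suitors in constant time, which is what keeps each proposal O(1):

    from collections import deque

    def gale_shapley(man_pref, woman_pref):
        n = len(man_pref)
        # rank[w][m] = position of man m on woman w's list, so that
        # "does w prefer m to m2?" is a constant-time comparison
        rank = [[0] * n for _ in range(n)]
        for w in range(n):
            for pos, m in enumerate(woman_pref[w]):
                rank[w][m] = pos
        nxt = [0] * n               # next position on each man's list to try
        husband = [None] * n        # husband[w] = man w is engaged to
        wife = [None] * n           # wife[m] = woman m is engaged to
        free_men = deque(range(n))  # the list of unmatched men
        while free_men:
            m = free_men[0]
            w = man_pref[m][nxt[m]]  # highest-ranked woman m hasn't tried
            nxt[m] += 1
            if husband[w] is None:   # w is free: engage
                free_men.popleft()
                wife[m], husband[w] = w, m
            elif rank[w][m] < rank[w][husband[w]]:  # w trades up to m
                free_men.popleft()
                free_men.append(husband[w])  # her old partner becomes free
                wife[husband[w]] = None
                wife[m], husband[w] = w, m
            # otherwise w rejects m; he stays free and tries his next choice
        return wife

  For example, gale_shapley([[0, 1], [0, 1]], [[0, 1], [1, 0]]) returns [0, 1].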

2.4: A Survey of Common Running Times

  • This section surveys which kinds of algorithms tend to take specific running times (one of them, merging two sorted lists, is sketched after this list)
    • Linear
      • Finding the maximum of a list or array of numbers
      • Merging two sorted lists
    • Linearithmic
      • Sorting: merge sort
    • Quadratic
      • Anything with two nested loops over the input, e.g., the brute-force closest pair of points
    • Cubic
      • Determining whether any pair among n sets (each of size at most n) is disjoint
    • Polynomial - O(n^k)
      • Brute-force searching over all size-k subsets of an n-element set
    • Exponential
      • Searching over all 2^n subsets of an n-element set
    • Factorial
      • Travelling salesman-like problems
    • Sublinear - O(log(n))
      • Binary search
  • This section was informative and didn't try too hard. 7/10.
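
  As a concrete instance of the linear category above, a minimal Python sketch of merging two sorted lists (my own code, not the book's):

    def merge(a, b):
        # Each comparison permanently consumes one element, so the loop
        # runs at most len(a) + len(b) times: linear time.
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i])
                i += 1
            else:
                out.append(b[j])
                j += 1
        out.extend(a[i:])  # at most one of the two lists has leftovers
        out.extend(b[j:])
        return out

  For example, merge([1, 4, 9], [2, 3, 10]) returns [1, 2, 3, 4, 9, 10].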

2.5: Priority Queues

  • This section discusses priority queues. A priority queue maintains a set of elements, each with a key indicating how important it is; the lower the key, the higher the priority.
  • We implement priority queues using a heap: a balanced binary tree in which each node's key is no larger than its children's keys, which makes insertion and extract-min both O(log n) (a sketch follows this list).
  • The implementation details span pg 60-64.
  • This section was not interesting to read. 4/10.
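
  Since the full implementation is on pg 60-64, here is just a minimal Python sketch in that spirit (my own code and names, not the book's): a min-heap stored in an array, with the two O(log n) operations; extract_min assumes the heap is non-empty.

    class MinHeap:
        # Array-backed balanced binary tree: node i has children at
        # 2i+1 and 2i+2, and every node's key <= its children's keys.
        def __init__(self):
            self.a = []

        def insert(self, key):        # O(log n): bubble the new key up
            a = self.a
            a.append(key)
            i = len(a) - 1
            while i > 0 and a[i] < a[(i - 1) // 2]:
                a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
                i = (i - 1) // 2

        def extract_min(self):        # O(log n): pop root, sift down
            a = self.a
            a[0], a[-1] = a[-1], a[0]  # move the last leaf to the root
            smallest = a.pop()
            i = 0
            while True:
                l, r, j = 2 * i + 1, 2 * i + 2, i
                if l < len(a) and a[l] < a[j]:
                    j = l
                if r < len(a) and a[r] < a[j]:
                    j = r
                if j == i:
                    return smallest
                a[i], a[j] = a[j], a[i]
                i = j

  For example, after h = MinHeap(); h.insert(5); h.insert(1); h.insert(3), calling h.extract_min() returns 1.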