Chapter 2

2.1: Computational Tractability

The goal of this course is to write correct and efficient algorithms, and this section goes through how we define efficiency.

Initial attempt at defining efficiency: “An algorithm is efficient if, when implemented, it runs quickly on real input instances.” What works:

  • Good initial intro to what we want. Need to get things done quickly!

What doesn't work:

  • Too vague: doesn't pin down where it needs to run quickly or how well it needs to perform
    • Will it be fast on more than just small cases?
    • What are real input instances?

Worst-Case Running Times and Brute-Force Search: “An algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search.” Why it works:

  • brute-force search just goes through every option, so improving on it is a real bar to clear
  • beating brute force means we've found some intrinsic structure in the problem, which is what computational tractability is about

Why it doesn't work:

  • qualitatively better performance is too vague

Polynomial Time as a Definition of Efficiency “An algorithm is efficient if it has a polynomial running time.” Why it works:

  • very specific
  • makes it easier to prove there is no efficient algorithm

Why it doesn't work:

  • Too specific: a polynomial algorithm with a huge exponent or constant can still be slow in practice, and occasionally a non-polynomial algorithm works well on the inputs that actually come up

2.2 Asymptotic Order of Growth

This section discusses how to measure the order of growth of an algorithm's running time and looks at how to find an upper bound, a lower bound, and a tight bound. Our goal is to express the growth rate of running times of algorithms.

Upper bounds:

  • If T(n) ≤ c*f(n) for some constant c > 0 and all sufficiently large n, then T is asymptotically bounded above by f.
  • We will denote an upper bound as O. There can be multiple upper bounds.

Lower bound:

  • If T(n) ≥ ε*f(n) for some constant ε > 0 and all sufficiently large n, then T is asymptotically bounded below by f.
  • We will denote a lower bound as Ω

Tight bounds:

  • If T(n) is both O(f(n)) and Ω(f(n)), then we've found an asymptotically tight bound
  • T is bounded tightly by f(n)
  • our tight bound is denoted as Θ
  • another way to find Θ is through the limit: if lim f(n)/g(n) as n → ∞ equals a constant c > 0, then f = Θ(g) (worked example below)
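For example (a quick check of the limit rule, with my own numbers): if T(n) = 3n^2 + 2n and f(n) = n^2, then T(n)/f(n) = 3 + 2/n → 3 as n → ∞, a constant greater than 0, so T(n) = Θ(n^2).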

Properties of Asymptotic Growth Rates

1. Transitivity: 
  * If f = O(g) and g = O(h) then f = O(h)
  * If f = Ω(g) and g = Ω(h) then f = Ω(h)
  * If f = Θ(g) and g = Θ(h) then f = Θ(h)
2. Sum of Functions
  * Suppose that f and g are two functions such that, for some other function h, we have f = O(h) and g = O(h). Then f + g = O(h)
  * The same can be said for any finite number of functions
  * If g = O(f), then f + g = O(f); for example, n^2 + n*logn = O(n^2)

Asymptotic bounds for some common functions

  • polynomials: if f is a polynomial of degree d, then f = O(n^d)
  • logarithms: for every b > 1 and every x > 0, log_b(n) = O(n^x), i.e. logs grow slower than any polynomial
  • exponentials: for every r > 1 and every d > 0, n^d = O(r^n), i.e. every polynomial is bounded by every exponential (quick numerical check below)
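A quick numerical sanity check of these orderings (my own illustration in Python, not from the book):

  import math

  # log_2(n) stays small, n^2 grows polynomially, and even a slow exponential
  # like 1.01^n eventually overtakes the polynomial once n is large enough.
  for n in [10, 100, 1000, 10000]:
      print(n, round(math.log2(n), 1), n ** 2, round(1.01 ** n, 1))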

Helpful to have the bounds for common functions, but I am wondering how we will find them in the future.

2.3: Implementing the Stable Matching Algorithm Using Lists and Arrays

Moving from the general discussion of the GS algorithm to an actual implementation. First we need to figure out what data structures to use.

Arrays:

  • Arrays have length n and A[i] is the ith element in the list.
  • Gives direct access to the ith element, therefore O(1) time to get the value A[i]
  • checking whether e is in A (i.e., e = A[i] for some i) takes O(n) time
  • if the elements are sorted, we can check for e in O(log n) using binary search
  • “An array is less good for dynamically maintaining a list of elements that changes over time, such as a list of free men”

Lists:

  • can insert and delete elements as the list changes
  • should use a linked list
  • one option is a doubly linked list (see the sketch after this list):
    • deletion: constant time
    • insertion: constant time
    • ith access: O(i) time. No direct access.
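A bare-bones doubly linked list sketch (my own illustration; the class and function names are mine) showing why insertion and deletion are constant time once you hold a pointer to a node:

  class Node:
      def __init__(self, value):
          self.value = value
          self.prev = None
          self.next = None

  def insert_after(node, new_node):
      # Splice new_node in right after node: O(1), just pointer updates.
      new_node.prev, new_node.next = node, node.next
      if node.next is not None:
          node.next.prev = new_node
      node.next = new_node

  def delete(node):
      # Unlink node from the list: O(1), given a pointer to the node itself.
      if node.prev is not None:
          node.prev.next = node.next
      if node.next is not None:
          node.next.prev = node.prev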

We will need to use both in the stable matching algorithm:

  • identify a free man - use a linked list
  • for a man m, identify the highest-ranked woman to whom he has not yet proposed - keep an array of his preference list plus a pointer to his next choice
  • for a woman w, decide if w is currently engaged and who her current partner is - use an array.
  • for a woman w and two men m and m', we need to check in constant time which of m and m' w prefers - initialize an n x n array of rankings at the start; then we get constant-time access throughout the algorithm

This gives O(n^2) running time.
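A minimal Python sketch of Gale-Shapley using exactly these structures (my own illustration; the names man_pref, woman_pref, ranking, next_proposal, current_partner, and free_men are mine, and men/women are assumed to be 0-indexed):

  from collections import deque

  def stable_matching(man_pref, woman_pref):
      # man_pref[m] / woman_pref[w] are preference lists of indices, most preferred first.
      n = len(man_pref)
      # ranking[w][m] = position of man m in woman w's list, so comparisons are O(1).
      ranking = [[0] * n for _ in range(n)]
      for w in range(n):
          for pos, m in enumerate(woman_pref[w]):
              ranking[w][m] = pos
      next_proposal = [0] * n        # index into man_pref[m] of the next woman m proposes to
      current_partner = [None] * n   # current_partner[w] = man w is engaged to, or None
      free_men = deque(range(n))     # list of currently free men
      while free_men:
          m = free_men[0]
          w = man_pref[m][next_proposal[m]]
          next_proposal[m] += 1
          if current_partner[w] is None:
              current_partner[w] = m
              free_men.popleft()
          elif ranking[w][m] < ranking[w][current_partner[w]]:
              # w prefers m to her current partner; the old partner becomes free.
              free_men.popleft()
              free_men.append(current_partner[w])
              current_partner[w] = m
          # otherwise m stays free and proposes to his next choice on the next pass
      return [(current_partner[w], w) for w in range(n)]

For example, stable_matching([[0, 1], [0, 1]], [[0, 1], [0, 1]]) returns [(0, 0), (1, 1)]: both men prefer woman 0 and both women prefer man 0, so man 0 gets woman 0 and man 1 ends up with woman 1.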

Getting into this stuff is more difficult for me. I don't remember running times as much and I try to think of it logically and actually go through the process, but it doesn't always work. Any step by step process would be helpful, but I am guessing it's something you get better at with time.

2.4: A Survey of Common Running Times

This section was easier to read since we just spent so much time on it and I felt pretty comfortable with it.

Linear time

  • O(n)
  • the running time is at most a constant times the size n of the input
  • computing the maximum - have to go through all n objects
  • Merging two sorted lists: repeatedly compare the smallest remaining element of each list, pop off the smaller one, and append it to the output list (see the sketch after this list)
  • The book has more info on why this is O(n); look it over if confused (p. 50)
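A Python sketch of that merge step (my own version of the idea above):

  def merge(a, b):
      # Merge two already-sorted lists into one sorted list in O(len(a) + len(b)) time.
      result = []
      i = j = 0
      while i < len(a) and j < len(b):
          # Compare the current front of each list and take the smaller element.
          if a[i] <= b[j]:
              result.append(a[i])
              i += 1
          else:
              result.append(b[j])
              j += 1
      # One list is exhausted; append whatever remains of the other.
      result.extend(a[i:])
      result.extend(b[j:])
      return result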

O(n*logn) time

  • very common
  • Mergesort divides the set of input numbers into two equal-sized pieces, solves each half recursively, and then merges the two sorted halves in linear time (see the sketch after this list)
  • also common because in many algorithms the most expensive step is sorting the input
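A self-contained Python sketch of mergesort (my own version); its merge step is the same linear-time merge from the previous section:

  def mergesort(items):
      # Sort in O(n log n): split in half, sort each half recursively, then merge.
      if len(items) <= 1:
          return list(items)
      mid = len(items) // 2
      left = mergesort(items[:mid])
      right = mergesort(items[mid:])
      # Merge the two sorted halves; O(len(items)) work at each level of recursion.
      merged = []
      i = j = 0
      while i < len(left) and j < len(right):
          if left[i] <= right[j]:
              merged.append(left[i])
              i += 1
          else:
              merged.append(right[j])
              j += 1
      merged.extend(left[i:])
      merged.extend(right[j:])
      return merged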

Quadratic time:

  • O(n^2)
  • arises from looking at all pairs of elements
  • typically two nested loops (see the example after this list)
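For example, a brute-force closest-pair-of-points check (my own sketch): two nested loops examine all O(n^2) pairs and do constant work per pair.

  import math

  def closest_pair_distance(points):
      # Smallest distance over all pairs of 2D points: O(n^2) pairs, O(1) work each.
      best = math.inf
      for i in range(len(points)):
          for j in range(i + 1, len(points)):
              (x1, y1), (x2, y2) = points[i], points[j]
              best = min(best, math.hypot(x1 - x2, y1 - y2))
      return best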

Cubic time

  • O(n^3)
  • arises from looking at all triples (3-tuples); see the example after this list
  • “elaborate sets of nested loops”
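A small illustration of three nested loops (my own example, not the book's): checking whether any three of the numbers sum to zero.

  def has_zero_sum_triple(nums):
      # Check all O(n^3) triples with three nested loops.
      n = len(nums)
      for i in range(n):
          for j in range(i + 1, n):
              for k in range(j + 1, n):
                  if nums[i] + nums[j] + nums[k] == 0:
                      return True
      return False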

O(n^k)

  • similar to quadratic and cubic time: arises from searching over all subsets of size k, or from k nested loops

Beyond polynomial time

  • find a maximum independent set in a graph of n nodes: brute force checks all 2^n subsets
    • talked about this in class
  • n! grows even faster than 2^n and shows up when looking at all orderings or matchings; usually there is a way to find a smaller bound than brute force

Sublinear time

  • Binary search runs in O(log n) time (see the sketch after this list)
  • O(log n) arises when we do a constant amount of work in order to throw away a constant fraction of the input
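A Python sketch of binary search on a sorted list (my own version): each probe discards half of the remaining input, so only O(log n) probes are needed.

  def binary_search(a, target):
      # Return an index of target in sorted list a, or -1 if absent.
      lo, hi = 0, len(a) - 1
      while lo <= hi:
          mid = (lo + hi) // 2
          if a[mid] == target:
              return mid
          elif a[mid] < target:
              lo = mid + 1    # discard the lower half
          else:
              hi = mid - 1    # discard the upper half
      return -1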

This all makes sense to me, I think the biggest struggle will be when I need to design an algorithm with a certain running time, but these are the notes I should look at!

2.5: A More Complex Data Structure: Priority Queues

Priority Queue:

  • Designed for applications in which elements have a priority value, or key, and each time we need to select an element from S, we want to take the one with highest priority.
  • A priority queue is a data structure that maintains a set of elements S, where each element v in S has an associated value key(v) that denotes the priority of element v; smaller keys represent higher priorities.

Use a heap to implement a priority queue

  • gets us O(log n) per operation
  • for every element v at a node i, the element w at i's parent satisfies key(w) ≤ key(v)
  • leftchild(i) = 2i, rightchild(i) = 2i+1, parent(i) = floor(i/2)
  • Heapify-up and Heapify-down restore this heap order after an insertion or deletion in O(log n) time (see the sketch after this list)
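A minimal Python sketch of Heapify-up and Heapify-down on a 1-indexed array (my own version; H[0] is unused padding so that leftchild(i) = 2i works, and elements are treated as their own keys):

  def heapify_up(H, i):
      # Swap upward until the parent's key is no larger than i's key.
      while i > 1 and H[i // 2] > H[i]:
          H[i // 2], H[i] = H[i], H[i // 2]
          i //= 2

  def heapify_down(H, i, n):
      # Swap downward until i's key is no larger than both children (heap holds n elements).
      while 2 * i <= n:
          child = 2 * i
          if child + 1 <= n and H[child + 1] < H[child]:
              child += 1                    # pick the smaller of the two children
          if H[i] <= H[child]:
              break
          H[i], H[child] = H[child], H[i]
          i = child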

Use these operations:

  • StartHeap(N): O(N)
  • Insert(H, v): O(log n)
  • FindMin(H): O(1)
  • Delete(H, i): O(log n)
  • ExtractMin(H): O(log n) (see the heapq example after this list)
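For quick experiments, Python's built-in heapq module gives the same Insert / FindMin / ExtractMin behavior (not the book's code, just a convenient stand-in):

  import heapq

  H = []                    # StartHeap: an empty list serves as the heap
  heapq.heappush(H, 5)      # Insert(H, 5) in O(log n)
  heapq.heappush(H, 2)
  heapq.heappush(H, 9)
  print(H[0])               # FindMin(H) in O(1) -> prints 2
  print(heapq.heappop(H))   # ExtractMin(H) in O(log n) -> prints 2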

I think heaps are soooo fun! I enjoyed this chapter because it laid out the code in a pretty readable way.
