courses:cs211:winter2012:journals:garrett:entries:week_1 (last modified 2012/03/07 03:45 by garrettheath4)
The **Competitive Facility Location Problem** is a problem where two competing sets are trying to claim weighted nodes on a graph that are not adjacent to nodes already claimed by the other set.
==== Chapter 2: Basics of Algorithm Analysis ====
=== Computational Tractability ===
It is difficult to define efficiency, much less to determine whether a given algorithm is efficient.  It may be tempting simply to compare an algorithm's running time with that of the equivalent brute-force algorithm, but how bad is the brute-force algorithm allowed to be, and on what scale do you say an algorithm is "this much better" than it?  Worst-case running times are useful, but they still depend on the type and size of the problem.  To decide whether an algorithm is efficient, we must compare it against a consistent baseline: polynomial running time.  An algorithm runs in polynomial time if, when the size of the problem doubles, its running time grows by only a constant factor (for an ''O(n^2)'' algorithm, for example, doubling ''n'' multiplies the work by about 4).
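As a rough sketch of why this baseline matters, we can compare operation counts (the counts below are illustrative assumptions for a hypothetical ''O(n^2)'' algorithm and a hypothetical ''O(2^n)'' brute-force algorithm, not measurements of any specific algorithm):

```python
def poly_ops(n):
    """Operation count for a hypothetical O(n^2) algorithm."""
    return n ** 2

def brute_ops(n):
    """Operation count for a hypothetical O(2^n) brute-force algorithm."""
    return 2 ** n

# Doubling the input size multiplies the polynomial cost by a constant
# factor (4 here), but squares the brute-force cost.
print(poly_ops(20) / poly_ops(10))    # -> 4.0
print(brute_ops(20) / brute_ops(10))  # -> 1024.0
```

Doubling the input again would still multiply the polynomial cost by 4, while the brute-force factor keeps exploding, which is why polynomial time makes a stable definition of "efficient."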
  
=== Asymptotic Order of Growth ===
When analyzing the time behavior of an algorithm, it would be nice to state some general characteristics that let us relate it to other algorithms with similar time behavior.  A good way to do this is to put bounds on the algorithm's time complexity: an upper bound and a lower bound.  The upper bound is written ''O(f(n))'' and is known as **"Big-O" notation**.  Saying an algorithm is ''O(f(n))'' means that, for all sufficiently large problem sizes, the algorithm takes no more computation than a constant multiple of ''f(n)''.  Similarly, the lower bound is written ''Ω(f(n))'' and is known as **"Big-Omega" notation**.  "Big-Omega" notation is useful because it describes a baseline that the algorithm's running time cannot beat, for any sufficiently large problem size.
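These definitions can be checked numerically for a concrete function.  In the sketch below, the function ''f(n) = 3n^2 + 2n'' and the witness constants ''c = 4'' and ''n0 = 2'' are chosen for illustration (they are assumptions for this example, not taken from the text):

```python
def f(n):
    """Example running-time function: f(n) = 3n^2 + 2n."""
    return 3 * n ** 2 + 2 * n

# Big-O claim: f(n) is O(n^2), witnessed by c = 4 and n0 = 2,
# since 3n^2 + 2n <= 4n^2 whenever n >= 2.
c, n0 = 4, 2
assert all(f(n) <= c * n ** 2 for n in range(n0, 1000))

# Big-Omega claim: f(n) is also Omega(n^2),
# since f(n) >= 3n^2 for every n >= 1.
assert all(f(n) >= 3 * n ** 2 for n in range(1, 1000))

print("bounds hold")
```

Because the same ''n^2'' works as both an upper and a lower bound (with different constants), ''f(n)'' is in fact ''Θ(n^2)'', a tight bound.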
~~DISCUSSION~~