=== Computational Tractability ===
It is difficult to define efficiency, much less determine whether a given algorithm is efficient or not. It may be tempting to just compare an algorithm's running time with that of its equivalent brute-force algorithm, but how bad should the brute-force algorithm be allowed to be, and what kind of scale do you use when you say it is "this much better than the brute-force algorithm"? The worst-case running times for an algorithm are useful, but those times are still dependent on the type and size of the problem. To determine whether an algorithm is efficient, we must compare it to a consistent base case or common characteristic. That base case is a polynomial-time algorithm, one in which doubling the size of the problem increases the computation needed by at most a constant multiplicative factor.
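For example, if an algorithm's running time is proportional to ''n^d'' for some constant ''d'', then doubling the input size multiplies the work by ''2^d'', a constant factor that does not depend on ''n''. A minimal sketch of this (my own illustration with a hypothetical step-counting function, not an example from the book):

<code python>
def steps_quadratic(n):
    """Count the steps of a hypothetical O(n^2) algorithm,
    such as one that compares every pair among n items."""
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1
    return count

# For a polynomial-time algorithm, doubling the problem size
# multiplies the work by a constant factor (here 2^2 = 4),
# no matter how large n already is.
for n in [100, 200, 400, 800]:
    print(n, steps_quadratic(n) / steps_quadratic(n // 2))
</code>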
=== Asymptotic Order of Growth ===
If we are analyzing the time behavior of an algorithm, it would be nice if we could provide some useful general characteristics about it that could be used to relate it to other algorithms with similar time behavior. A good way to do this is to put bounds on the time characteristic of the algorithm: an upper bound and a lower bound on the algorithm's time complexity. The upper bound of an algorithm's time complexity is notated as ''O(f(n))'' and is known as **"Big-O" notation**. Saying that an algorithm is ''O(f(n))'' means that, for all sufficiently large problem sizes, the algorithm will take no more computation than a constant multiple of the given function. Similarly, the lower bound of an algorithm's time complexity is notated as ''Ω(f(n))'' and is known as **"Big-Omega" notation**. "Big-Omega" notation is useful because it establishes a baseline: for all sufficiently large problem sizes, the algorithm's running time is at least a constant multiple of the given function, so it cannot beat that bound as the problem grows.
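To make the definitions concrete, consider the made-up running time ''T(n) = 3n^2 + 5n'' (my own example, not one from the text). It is ''O(n^2)'' because ''T(n) <= 4n^2'' for all ''n >= 5'', and it is ''Ω(n^2)'' because ''T(n) >= 3n^2'' for all ''n >= 1''. A small sketch that checks those bound constants numerically:

<code python>
def T(n):
    """Hypothetical running time of some algorithm (made-up example)."""
    return 3 * n**2 + 5 * n

def f(n):
    """Candidate bounding function for both Big-O and Big-Omega."""
    return n**2

# Upper bound: T(n) <= 4*f(n) once n >= 5, so T(n) is O(n^2).
# Lower bound: T(n) >= 3*f(n) for every n >= 1, so T(n) is Omega(n^2).
for n in [5, 10, 100, 1000]:
    assert 3 * f(n) <= T(n) <= 4 * f(n)
    print(n, T(n) / f(n))  # the ratio stays between the constants 3 and 4
</code>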
| ~~DISCUSSION~~ |