courses:cs211:winter2012:journals:garrett:entries:week_1 — last revised 2012/03/07 03:45 by garrettheath4
=== Asymptotic Order of Growth ===
If we are analyzing the time behavior of an algorithm, it is useful to describe it with general characteristics that relate it to other algorithms with similar time behavior. A good way to do this is to put bounds on the algorithm's time complexity: an upper bound and a lower bound. The upper bound of an algorithm's time complexity is written ''O(f(n))'' and is known as **"Big-O" notation**. Saying that an algorithm is ''O(f(n))'' means that, once the problem size is sufficiently large, the algorithm's running time never exceeds a constant multiple of ''f(n)''. Similarly, the lower bound of an algorithm's time complexity is written ''Ω(f(n))'' and is known as **"Big-Omega" notation**; it means that, for sufficiently large problem sizes, the running time is at least a constant multiple of ''f(n)''. "Big-Omega" notation is useful because it indicates a baseline that the algorithm's running time cannot beat, no matter the input.
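As a sketch of what these bounds mean, consider a hypothetical step-count function ''T(n) = 3n² + 10n'' (the function and the constants here are illustrative assumptions, not from the journal). The definitions say we can find constants ''c'' and a threshold ''n₀'' such that the bound holds for every ''n ≥ n₀'':

```python
# Hypothetical running-time function: T(n) = 3n^2 + 10n steps.
def T(n):
    return 3 * n**2 + 10 * n

# Upper bound (Big-O): T(n) <= 4*n^2 holds once n >= 10,
# because 10n <= n^2 exactly when n >= 10.
# Lower bound (Big-Omega): T(n) >= 3*n^2 holds for every n >= 1.
# So T(n) is O(n^2) with c = 4, n0 = 10, and Omega(n^2) with c = 3, n0 = 1.
for n in range(10, 1000):
    assert T(n) <= 4 * n**2   # Big-O: never more than a constant multiple of n^2
    assert T(n) >= 3 * n**2   # Big-Omega: never less than a constant multiple of n^2
```

Since the same function ''n²'' works as both an upper and a lower bound here, ''T(n)'' is also ''Θ(n²)'' (a tight bound).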
| |
| |
| |
~~DISCUSSION~~