More specifically than the previous section, this section defines mathematically how we understand efficiency. Let T(n) be algorithm A's worst-case runtime on an input of size n. We say T(n) is asymptotically upper-bounded by f(n), written T(n) = O(f(n)), if there exist constants c > 0 and n₀ such that T(n) ≤ c·f(n) for all n ≥ n₀.
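To make the definition concrete, here's a quick worked example of my own (not from the section), picking explicit constants:

```latex
% Claim: T(n) = 3n^2 + 10n is O(n^2).
% Pick c = 4 and n_0 = 10: for all n >= 10 we have 10n <= n^2, so
T(n) = 3n^2 + 10n \le 3n^2 + n^2 = 4n^2 \quad \text{for all } n \ge 10,
% hence T(n) = O(n^2) with c = 4 and n_0 = 10.
```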
We define asymptotic lower-bounding similarly, except that the algorithm's runtime is greater than or equal to c·f(n) for all input sizes above some threshold: T(n) = Ω(f(n)) if there exist constants c > 0 and n₀ such that T(n) ≥ c·f(n) for all n ≥ n₀.
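And a matching lower-bound example, also mine:

```latex
% Claim: T(n) = n^2/2 - 3n is \Omega(n^2).
% Pick c = 1/4 and n_0 = 12: for all n >= 12 we have n^2/4 >= 3n, so
T(n) = \frac{n^2}{2} - 3n \ge \frac{n^2}{2} - \frac{n^2}{4} = \frac{1}{4} n^2 \quad \text{for all } n \ge 12.
```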
We call a bound asymptotically tight when it lies in both the set of all upper bounds and the set of all lower bounds; this is written Θ(f(n)). This is the useful notion because, yes, a lot of algorithms are upper-bounded by something enormous like 10000·e^(1000n), and a lot of algorithms are lower-bounded by constant runtime, but knowing both of those about an algorithm tells us next to nothing. Knowing that an algorithm's runtime is eventually well modeled by n^2, from above and below, actually helps.
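To see what a tight bound looks like in practice, here's a rough empirical sketch I wrote (the routine and names are mine, not from the section): if a routine really is Θ(n^2), the measured ratio T(n)/n^2 should settle near the hidden constant c rather than drifting toward 0 or infinity.

```python
import time

def quadratic_work(n):
    """A deliberately Theta(n^2) routine: a full double loop over n items."""
    total = 0
    for i in range(n):
        for j in range(n):
            total += i ^ j
    return total

# If T(n) = Theta(n^2), the ratio T(n) / n^2 should hover near a constant.
for n in (500, 1000, 2000, 4000):
    start = time.perf_counter()
    quadratic_work(n)
    elapsed = time.perf_counter() - start
    print(f"n = {n:5d}   T(n) = {elapsed:.4f}s   T(n)/n^2 = {elapsed / n**2:.3e}")
```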
Growth rates are transitive (if f = O(g) and g = O(h), then f = O(h)), and they add by taking the maximum of the two: O(f) + O(g) = O(max(f, g)). This will be helpful in proving the runtime of algorithms built out of subroutines whose bounds we already know, I would imagine.
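The max rule has a one-line justification, as I understand it: if two pieces run in sequence, bound each by its own constant and factor out the larger function.

```latex
% Suppose T_1(n) <= c_1 f(n) and T_2(n) <= c_2 g(n) for all n >= n_0. Then
T_1(n) + T_2(n) \le c_1 f(n) + c_2 g(n) \le (c_1 + c_2)\,\max\big(f(n), g(n)\big),
% so T_1(n) + T_2(n) = O(\max(f(n), g(n))).
```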
I enjoyed this section a lot, actually. 9/10.