

Section 2.1 Computational Tractability

The first attempt at a definition of efficiency is: “an algorithm is efficient if, when implemented, it runs quickly on real input instances.” This is a reasonable start, but it fails to specify the machine on which the algorithm runs, what counts as running quickly, and how the algorithm scales as input sizes grow.

We can consider worst-case running times and brute-force methods on the way to a better definition of efficiency. Worst-case running time is the right measure for comparing algorithms because the alluring alternative, average-case analysis, ignites a debate over how to choose the random inputs over which to average. A good benchmark against which to compare an algorithm's worst-case running time is the running time of the brute-force method, which traverses the entire search space of possible solutions. These two components lead us to the next definition of efficiency: “an algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search.” A qualm we may have with this is how to define “qualitatively better running time.”
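As an illustrative sketch (this problem and these function names are not from the section itself), consider deciding whether any two entries of a list sum to a target value. The brute-force method tries every pair, while a one-pass method using a set achieves qualitatively better worst-case performance:

```python
# Hypothetical example: brute-force search vs. a qualitatively better algorithm.
# Problem: do any two distinct entries of `values` sum to `target`?

def has_pair_brute_force(values, target):
    """Brute force: try every pair, traversing the whole search space (O(n^2) worst case)."""
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            if values[i] + values[j] == target:
                return True
    return False

def has_pair_hashing(values, target):
    """One pass with a set of values seen so far (O(n) worst case under hashing assumptions)."""
    seen = set()
    for v in values:
        if target - v in seen:
            return True
        seen.add(v)
    return False
```

Both functions return the same answers, but the second scales qualitatively better as the list grows, which is exactly the comparison the second definition of efficiency asks us to make.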

The use of polynomial time gives us the ability to evaluate algorithms quantitatively. The desirable scaling property states that there exist constants c > 0 and d > 0 such that the algorithm's running time on inputs of size N is bounded by cN^d; consequently, when N doubles to 2N, the algorithm slows down by only a constant factor of 2^d. This yields a third definition of efficiency: “an algorithm is efficient if it has a polynomial running time.” One last, unexpected benefit of this definition is that it is concrete enough to be tested quantitatively and proven or disproven.
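The scaling property can be checked with a small sketch (the particular constants c = 1.5 and d = 3 below are made-up illustrative values): doubling N multiplies a bound of the form cN^d by exactly 2^d, no matter what N or c is.

```python
import math

# Illustrative sketch of the polynomial scaling property.
# If the running time is bounded by c * N**d, then doubling the input size
# slows the algorithm down by the constant factor 2**d, independent of N and c.
# The constants c = 1.5 and d = 3 are arbitrary choices for illustration.

def time_bound(n, c=1.5, d=3):
    """Polynomial bound c * n**d on running time for input size n."""
    return c * n ** d

for n in (10, 100, 1000):
    ratio = time_bound(2 * n) / time_bound(n)
    # The ratio is 2**d for every n, so doubling the input costs a fixed factor.
    assert math.isclose(ratio, 2 ** 3)
```

An exponential bound such as c * 2**N has no such property: doubling N squares the 2**N factor rather than multiplying it by a constant, which is why polynomial time is the dividing line in this definition.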

courses/cs211/winter2018/journals/holmesr/section_2.1.1516156475.txt.gz · Last modified: by holmesr