

Chapter 2

<Enter overview of chapter 2 here>

Section 2.1 (Computational Tractability)

This section is all about efficiency, and it takes several attempts at defining it. The first attempt reads as follows:

“Proposed Definition of Efficiency (1): An algorithm is efficient if, when implemented, it runs quickly on real input instances.”

This definition does not work because it has no concept of scale. An inefficient process could meet this standard if it were run on a very small input, since the computer would still finish quickly. The next attempt reads:
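As an illustration (mine, not from the text): even a brute-force search over all subsets, which is exponential in general, finishes instantly on a tiny input, so "runs quickly" on small cases tells us nothing about efficiency.

```python
from itertools import chain, combinations

def all_subsets(items):
    # Enumerate every subset of the input -- there are 2^n of them,
    # so this is exponential in the number of items.
    return list(chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)))

# On 10 items this is only 2^10 = 1024 subsets and runs in a blink,
# even though the same code is hopeless for, say, 60 items (2^60 subsets).
subsets = all_subsets(range(10))
print(len(subsets))  # 1024
```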

“Proposed Definition of Efficiency (2): An algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search.”

This definition is simply too vague to work. It does not provide guidelines for what counts as “qualitatively better worst-case performance”, and is thus subjective. The final definition fixes this vagueness by providing a way of quantifying efficiency:

“Proposed Definition of Efficiency (3): An algorithm is efficient if it has a polynomial running time.”

This is the ideal definition for our purposes because of its specificity. It lets us evaluate algorithms quantitatively and decide whether a process is feasible. This definition is also negatable: using it, we can express the claim that a problem has no efficient algorithm at all.
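To see why polynomial running time is a workable cutoff, here is a small sketch (illustrative step counts of my own, not from the text) comparing a polynomial bound like n^2 against the exponential 2^n that brute-force search over all subsets would incur:

```python
def polynomial_steps(n):
    # A polynomial bound such as n^2 stays manageable as n grows.
    return n ** 2

def exponential_steps(n):
    # Brute-force search over all subsets of n items takes 2^n steps.
    return 2 ** n

# At n = 10 the two are comparable; by n = 50 the exponential count
# is astronomically larger than the polynomial one.
for n in (10, 30, 50):
    print(n, polynomial_steps(n), exponential_steps(n))
```

Even at a billion steps per second, 2^50 steps take days while 50^2 steps are instantaneous, which is the intuition the book's cutoff captures.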

Section 2.2 (Asymptotic Order of Growth)

This section looks at how an algorithm's worst-case running time grows as the input size grows, expressing that growth with asymptotic upper bounds (O), lower bounds (Ω), and tight bounds (Θ).
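As an informal illustration (my own, not from the text), one way to see asymptotic order of growth is to count the basic comparisons two simple algorithms perform: a single linear scan grows like n, while an all-pairs check grows like n^2 in the worst case.

```python
def max_index(items):
    # Find the index of the maximum element with one linear scan;
    # the comparison count grows linearly, O(n).
    comparisons = 0
    best = 0
    for i in range(1, len(items)):
        comparisons += 1
        if items[i] > items[best]:
            best = i
    return best, comparisons

def has_duplicate(items):
    # Check every pair for equality; in the worst case (all items
    # distinct) the comparison count is n(n-1)/2, i.e. O(n^2).
    comparisons = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            comparisons += 1
            if items[i] == items[j]:
                return True, comparisons
    return False, comparisons
```

Running both on distinct inputs of size 10 and 20 shows the linear count doubling while the quadratic count roughly quadruples, which is exactly the kind of growth comparison this section formalizes.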

courses/cs211/winter2018/journals/hornsbym/chapter_2.1516078509.txt.gz · Last modified: by hornsbym
CC Attribution-Noncommercial-Share Alike 4.0 International