===== 2.1 =====

The authors start by talking about how, in designing algorithms, efficiency is key. But what exactly does efficiency mean? Well, that is not an easy question. In fact, it's so difficult a question to answer that it took the authors of this book--with their super-smart brains and their Ph.D.s from M.I.T. and Eötvös--three whole tries to make up their minds. Or maybe that's just them doing what they said they'd do in the preface and approaching algorithm design as a "thought process." It reminds me a little bit of Goldilocks and the Three Bears. The first working definition of efficiency (runs fast on real inputs) is too inflexible. The second working definition (qualitatively better worst-case performance than brute-force search) is too vague. And the third proposed definition is just right...well, not really. The authors point out that there are, of course, exceptions to the definition they settle on (running in "polynomial time"), and that technically n^200 shouldn't be considered efficient. But they say it usually works, so it's a good definition. Plus it's negatable.
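
Just to see that contrast for myself, here's a tiny Python sketch (my own toy numbers, not the book's example) showing how fast an exponential brute-force-style bound blows past a polynomial one--which is the intuition behind drawing the efficiency line at polynomial time:

<code python>
# My own toy comparison (not from the book): even a modest exponential
# running time overtakes a polynomial one very quickly as n grows.
for n in [10, 30, 50, 70]:
    poly = n ** 3   # a polynomial bound, e.g. O(n^3)
    expo = 2 ** n   # a brute-force-style exponential bound
    print(f"n={n:>3}  n^3={poly:>12,}  2^n={expo:>25,}")
</code>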
  
-**2.2**+===== 2.2 ===== 
  
First, the authors state that, in measuring the efficiency of an algorithm, it's not really important to count the exact number of steps for a given input size; it's sufficient to describe the running time in less detail than that. Then they introduce the idea of asymptotic upper bounds--or big O--define it mathematically, and give an example to illustrate. They explain that an algorithm can have many upper bounds...big O isn't necessarily going to be the smallest upper bound for the algorithm.
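
To pin down that mathematical definition: T(n) is O(f(n)) if there are constants c > 0 and n0 >= 0 such that T(n) <= c*f(n) for every n >= n0. Here's a little Python sanity check of the definition; the running time and the witnesses c and n0 are my own toy choices, not the book's example:

<code python>
# Definition: T(n) is O(f(n)) if there exist constants c > 0 and n0 >= 0
# with T(n) <= c * f(n) for all n >= n0.
# The function and witnesses below are made up for illustration.

def T(n):
    return 2 * n**2 + 3 * n   # a made-up running time

def f(n):
    return n**2               # the claimed upper bound: T(n) = O(n^2)

c, n0 = 5, 1                  # witnesses: 2n^2 + 3n <= 5n^2 whenever n >= 1
assert all(T(n) <= c * f(n) for n in range(n0, 10_000))
print("T(n) <= 5*n^2 holds for all sampled n >= 1")
</code>

And since 2n^2 + 3n is also at most 5n^3 for n >= 1, the same T(n) is O(n^3) too--which is exactly the point that a big-O bound doesn't have to be the tightest one.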