====== Chapter 2 ======

===== 2.1: Computational Tractability =====

The section begins by developing a working definition of an efficient algorithm. It starts by saying an algorithm is efficient if it runs quickly on real-life inputs. Next, after discussing brute-force search, it amends this: an algorithm is efficient if it performs qualitatively better than brute-force search. The text then introduces the concept of polynomial time and refines the definition again, to say an algorithm is efficient if it runs in polynomial time. Finally, it surveys the common run-time complexities and how they correspond to real running time.

I give this section a 7/10 in terms of being interesting, as it did not simply describe an algorithm in depth.
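
To see how those run-time classes correspond to real time, here is a small Python sketch of my own (the input sizes and the choice of functions are mine, not from the text) tabulating a few common growth functions:

```python
# Compare how common run-time functions grow with input size n.
# The functions (n log n, n^2, 2^n) mirror the chapter's point that
# the complexity class, not the machine, dominates real running time.
import math

def growth_table(sizes):
    """Return (n, n log n, n^2, 2^n) rows for the given input sizes."""
    return [(n, n * math.log2(n), n ** 2, 2 ** n) for n in sizes]

for n, nlogn, n2, two_n in growth_table([10, 20, 30]):
    print(f"n={n:3d}  n log n={nlogn:8.1f}  n^2={n2:6d}  2^n={two_n:12d}")
```

Even at n = 30 the exponential column is already in the billions, which matches the book's point that brute-force (often exponential) search stops being usable almost immediately.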
| + | |||
| + | ===== 2.2: Asymptotic Order of Growth Notation ===== | ||
| + | |||
This section goes into how to talk precisely about run-times. For instance, it discusses disregarding constant factors and lower-order terms. In building up this description, the text introduces asymptotic upper bounds (O), lower bounds (Ω), and tight bounds (Θ), along with their basic properties.
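
As a quick sanity check of the upper-bound definition on an example of my own (the function and the constants c, n0 are mine, not the book's):

```python
# Big-O definition check: f(n) = 3n^2 + 5n is O(n^2) because there
# exist constants c and n0 with f(n) <= c * n^2 for all n >= n0.
# Here c = 4 and n0 = 5 work, since 5n <= n^2 whenever n >= 5.
def f(n):
    return 3 * n ** 2 + 5 * n

c, n0 = 4, 5
assert all(f(n) <= c * n ** 2 for n in range(n0, 1000))
print("3n^2 + 5n is O(n^2) with c = 4, n0 = 5")
```

This is exactly the sense in which constant factors and the lower-order 5n term get disregarded.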
| + | |||
| + | This section, once again, did not merely detail a long-winded algorithm. As such, it gets a 7/10. | ||
| + | |||
| + | ===== 2.3: Implementing the Stable Matching Problem Using Lists and Arrays ===== | ||
| + | |||
The reading begins to bridge the gap from analysis on paper to the actual efficient implementation of algorithms. The section discusses the complexity of different array operations. As a result, the shortcomings of arrays, specifically dynamically managing the data in them, come to light. As such, another method of implementation, the linked list, is introduced, along with the trade-offs between the two structures.

One thought that I have in response to this section pertains to hashing. Through internships and discussions with upper-level students, I have heard a lot about how hash sorts and hash maps can often be significantly more efficient than other mappings and sorts, but I have not really been exposed to them through my classes at W&L. So, I am curious what benefits and drawbacks hashing has relative to the structures discussed here.

I generally find data structures and implementation extremely important, so I give this section a 7/10.