Chapter 2

2.1: Computational Tractability

This section reminded me of what we learned in the CSCI 112 course at W&L. It focused on runtime and memory allocation in terms of efficiency, and it was fairly straightforward and easy to read. Reading about runtime for algorithms reinforced what I had learned in my previous computer science courses (big O notation). The methodology behind finding efficiency is to start by analyzing worst-case running times. For example, in the stable matching problem, we should consider the search space (which, in this case, is huge: n! possible perfect matchings).
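To get a feel for why the brute-force search space is hopeless while the algorithm itself is fine, here is a minimal sketch (my own illustration, not from the book) comparing n!, the number of perfect matchings, against the n^2 proposals Gale-Shapley makes in the worst case:

```python
import math

# Compare the brute-force search space (n! perfect matchings) with the
# worst-case number of proposals in Gale-Shapley (n^2).
for n in [5, 10, 20]:
    print(n, math.factorial(n), n ** 2)
```

Even at n = 20, n! is already in the quintillions while n^2 is only 400, which is why worst-case analysis steers us toward the polynomial-time algorithm rather than enumeration.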

2.2: Asymptotic Order of Growth

This section was a lot more difficult to understand than any of the previous sections. On a scale of 1-10, this section was about a 6 for me in regards to how readable it was. It mainly covered upper and lower bounds for some function f(n), where T(n) is the worst-case running time. For an upper bound, T(n) is O(f(n)). For a lower bound, T(n) is Ω(f(n)). If T(n) is both O(f(n)) and Ω(f(n)), then it is Θ(f(n)), which means that f(n) is an asymptotically tight bound for T(n).
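A small numerical sketch helped me see what "tight bound" means. The function T(n) and the constants c1, c2 below are my own example, not from the text: T(n) = 3n^2 + 2n is Θ(n^2) because it is sandwiched between c1·n^2 and c2·n^2 for all n ≥ 1.

```python
# Sketch: T(n) = 3n^2 + 2n is Theta(n^2), witnessed by constants
# c1 = 3 (lower bound) and c2 = 5 (upper bound) for all n >= 1,
# since 3n^2 <= 3n^2 + 2n and 2n <= 2n^2 once n >= 1.
def T(n):
    return 3 * n * n + 2 * n

c1, c2 = 3, 5
assert all(c1 * n * n <= T(n) <= c2 * n * n for n in range(1, 1000))
```

The point is that a single pair of constants works for every sufficiently large n; the lower-order 2n term gets absorbed.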

As I was unable to attend the last class due to illness, this discussion very much confused me. The following discussion, however, did make some sense. These functions have certain properties; two of them are transitivity and the sum rule for functions. This makes sense, as these properties apply to any mathematical function and should therefore apply to any algorithm's running time. Furthermore, this section discussed three main types of functions: polynomials, logarithms, and exponentials. Working with various big O notations in CSCI 112 will help me remember these notations, but it may be more difficult to remember the asymptotic bounds.
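The sum rule can be checked concretely. In this made-up example, f and g are each O(n) with their own constants, so their sum is O(n) with the constants added:

```python
# Sum rule sketch: if f(n) <= c_f * h(n) and g(n) <= c_g * h(n),
# then f(n) + g(n) <= (c_f + c_g) * h(n), so f + g is also O(h).
def f(n): return 4 * n          # O(n) with constant c_f = 4
def g(n): return 7 * n + 3      # O(n) with constant c_g = 10 for n >= 1

# f(n) + g(n) = 11n + 3 <= 14n for all n >= 1
assert all(f(n) + g(n) <= 14 * n for n in range(1, 1000))
```

This is why we can analyze an algorithm one phase at a time and then add the bounds: the slowest phase dominates.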

2.3: Implementing the Stable Matching Algorithm Using Lists and Arrays

This section was very easy to read and understand, and on a scale from 1-10 was most likely a 9 on readability. In my opinion, this section was very important, as it covered how one would go about representing various lists of data using arrays and linked lists. This matters because the two structures are more efficient in different areas. For example, an array is less efficient at maintaining a list that constantly changes size, while a linked list is less efficient at finding the element at the ith position. Because in any algorithm we would like the best possible running time and memory allocation, we often need a combination of the two structures to represent various lists. To keep the Gale-Shapley algorithm at an O(n^2) running time, several actions in the algorithm need to be done in constant time, and this is achieved through the combined use of arrays and linked lists. After our discussions in class and reading this section, it is very clear what type of structure should be used for each list and why.
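Putting the data-structure choices together, here is a sketch of an O(n^2) Gale-Shapley in the spirit of this section (the function name and input format are my own; a deque stands in for the linked list of free men). Each constant-time piece is an array lookup: who a man proposes to next, whether a woman is engaged, and, via a precomputed rank table, whether she prefers a new suitor.

```python
from collections import deque

def gale_shapley(men_pref, women_pref):
    """O(n^2) stable matching. men_pref[m] and women_pref[w] are
    preference lists over the other side, most preferred first."""
    n = len(men_pref)
    # Precompute each woman's ranking of the men so that "does w prefer
    # m to m_prime?" is an O(1) array lookup, not an O(n) list scan.
    rank = [[0] * n for _ in range(n)]
    for w in range(n):
        for i, m in enumerate(women_pref[w]):
            rank[w][m] = i
    next_proposal = [0] * n      # index of the next woman m will propose to
    wife = [None] * n            # current partner of each man
    husband = [None] * n         # current partner of each woman
    free_men = deque(range(n))   # linked-list-style: O(1) insert and remove
    while free_men:
        m = free_men.popleft()
        w = men_pref[m][next_proposal[m]]
        next_proposal[m] += 1
        if husband[w] is None:               # w is free: engage
            husband[w], wife[m] = m, w
        elif rank[w][m] < rank[w][husband[w]]:  # w trades up to m
            free_men.append(husband[w])
            wife[husband[w]] = None
            husband[w], wife[m] = m, w
        else:                                # w rejects m
            free_men.append(m)
    return wife
```

Each man proposes to each woman at most once, and every iteration does O(1) work, which is exactly where the O(n^2) bound comes from.

```python
# Tiny usage example: 2 men, 2 women.
print(gale_shapley([[0, 1], [0, 1]], [[0, 1], [1, 0]]))  # → [0, 1]
```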
