courses:cs211:winter2018:journals:goldm:ch2 (created 2018/01/17 00:22 by goldm; last modified 2018/01/29 23:52)
====== Chapter 2 ======

===== 2.1: Computational Tractability =====

The section works toward a definition of an efficient algorithm. It starts by calling an algorithm efficient if it runs quickly on real-life inputs. Next, after discussing brute force, it amends this: an algorithm is efficient if it performs qualitatively better than brute-force search. Finally, the text introduces the concept of polynomial time and settles on the definition that an algorithm is efficient if it runs in polynomial time. It closes by discussing the different variations of run-time complexity and how they correspond to real time.

I give this section a 7/10 for interest, as it did not simply describe an algorithm in depth.

| + | ===== 2.2: Asymptotic Order of Growth Notation ===== | ||
| + | |||
This section goes into how to talk precisely about run times. For instance, it discusses disregarding constant factors and lower-order terms. In building up this description, the text defines asymptotic upper bounds (O), lower bounds (Ω), and tight bounds (Θ), along with their basic properties.

This section, once again, did not merely detail a long-winded algorithm. As such, it also gets a 7/10.

| + | ===== 2.3: Implementing the Stable Matching Problem Using Lists and Arrays ===== | ||
| + | |||
The reading begins to bridge the gap from analysis on paper to actual efficient implementation of algorithms. The section discusses the complexity of different array operations. As a result, the shortcomings of arrays, specifically managing dynamically changing data in them, come to light. To address this, another method of implementation, the linked list, is introduced, and the trade-offs between the two structures come up in implementing the stable matching algorithm.

One thought that I have in response to this section pertains to hashing. Through internships and discussions with upper-level students, I have heard a lot about how things like hash sorts and hash maps can often be significantly more efficient than other mappings and sorts, but I have not really been exposed to them through my classes at W&L. So, I am curious what benefits/drawbacks hashing has compared to the structures discussed here.

I generally find data structures and implementation extremely important, so I give this section a 7/10.

| + | ===== 2.4: A Survey of Common Running Times ===== | ||
| + | |||
True to its name, this section discusses the aspects of different algorithms that lead to their run times, so one can reason about why other algorithms end up running in these common times. In discussing linear algorithms, the most common cause of linear run time is passing over each element of the input once and performing constant-time computations on each element. Additionally, the section covers O(n log n) time, which typically arises from divide-and-conquer algorithms like mergesort, as well as quadratic, cubic, and exponential running times.

I found this section quite exhaustive, as it provided both examples and logic, so I do not really have any questions about it. It puts a face and a reason to the names we typically hear as common run times.

This section did feel as if it dragged on a bit, so I give it a 5/10.

| + | ===== 2.5: A More Complex Data Structure: Priority Queues ===== | ||
| + | |||
The section goes over what a priority queue is: a set where each element has an associated key that determines its priority, with lower keys meaning higher priority. An example of when this is useful is managing computer processes. It goes on to discuss an implementation that allows elements to be added, removed, and the minimum element extracted in logarithmic time; with these run times, a sequence of priority queue operations can sort a set. To implement the priority queue, the book uses a heap data structure. Inserting an element appends it to the end of the heap and runs Heapify-up, which restores the heap order in logarithmic time. To remove the minimum element of the heap, we use ExtractMin, which moves the last element into the root's place; if the key of that replacement element is too small for its new position, we use Heapify-up, and if it is too big, we use Heapify-down. Both of these procedures fix the heap in O(log n) time, so we can delete an element in O(log n) time. The section then explains how to implement the priority queue with a heap and discusses the necessary methods for doing so. Finally, to add extra functionality to the queue, specifically operating on an arbitrary element rather than just the minimum, the implementation keeps track of each element's position in the heap.

This section is important in giving an example of how to use an appropriate data structure to efficiently implement a more complicated idea. I would like to see more examples of this throughout the text, as I consider it integral to developing algorithms that are useful in the real world, not just ones that look good on paper.

While important, the reading felt a little slow, since we have already learned all of this in class. Overall, it earns a 4/10.
