===== Section 2.1 =====
While the book admits there are potentially better ways to define the efficiency of algorithms, years of trial and error have found that defining efficiency in terms of polynomial time is //pretty good//. Using this definition, it becomes possible to show that some particular algorithm cannot run in polynomial time and is thereby inefficient.

This section was extremely logical and did not include too much of the mathematical syntax that always ends up confusing me. I rate this section an 8 for interestingness and a 9 for readability.
===== Section 2.2 =====
Section 2.2 introduces O, Ω, and Θ notation and how they can be used to describe upper, lower, and tight bounds on the running time of algorithms as their inputs scale up or down. O(//f(n)//) for some function //f(n)// means that the algorithm in question will run in at most a constant multiple of //f(n)// time once //n// is large enough.
| + | |||
| + | As I understood the section, it seemed that if some algorithm is both O(//f(n)//) and Ω(// | ||
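The tight-bound idea can be checked numerically for a concrete function. A small sketch (my own example, not the book's proof):

```python
# Numerical sanity check (my own example, not the book's proof):
# f(n) = 3n^2 + 2n is Theta(n^2) because the ratio f(n)/n^2 is trapped
# between the constants c1 = 3 and c2 = 4 for every n >= 2.

def f(n):
    return 3 * n**2 + 2 * n

for n in [10, 100, 1000, 10000]:
    ratio = f(n) / n**2
    assert 3 <= ratio <= 4   # both the O(n^2) and Omega(n^2) bounds hold
    print(n, ratio)
```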
| + | |||
| + | The chapter goes on to prove some of the basic identities of the three notations. They go on to prove transitivity and sums of functions, then show that each notation can accommodate polynomials, | ||
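The transitivity identity for O follows from the definitions alone; a sketch of the inequality chain, with //c//, //c'//, //n₀//, //n₀'// as the usual witness constants:

```latex
f(n) \le c\,g(n) \ \text{for } n \ge n_0
\quad\text{and}\quad
g(n) \le c'\,h(n) \ \text{for } n \ge n_0'
\;\Longrightarrow\;
f(n) \le (c\,c')\,h(n) \ \text{for } n \ge \max(n_0, n_0'),
```

so //f// = O(//h//), with //cc'// and max(//n₀//, //n₀'//) as the new witnesses.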
| + | |||
| + | The beginning of the chapter was extremely interesting, | ||
===== Section 2.3 =====
Section 2.3 goes over what we discussed in class, that is, how to implement the Gale-Shapley algorithm using lists and arrays. More specifically, it shows how choosing the right data structure for each piece of the algorithm lets every iteration run in constant time, so the whole algorithm finishes in O(//n//²) time.
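The list-and-array implementation the section describes can be sketched as follows; the variable names (`next_choice`, `current`, `ranking`) are my own, not necessarily the book's:

```python
# A minimal sketch of the array-based Gale-Shapley implementation;
# each loop iteration does only O(1) work thanks to the arrays below.

def stable_matching(man_prefs, woman_prefs):
    """man_prefs[m] and woman_prefs[w] are preference lists of indices,
    best first. Returns current, where current[w] is woman w's partner."""
    n = len(man_prefs)
    # ranking[w][m] = position of man m on woman w's list, precomputed
    # so a woman can compare two suitors in O(1) time.
    ranking = [[0] * n for _ in range(n)]
    for w in range(n):
        for pos, m in enumerate(woman_prefs[w]):
            ranking[w][m] = pos
    next_choice = [0] * n      # index of the next woman each man will ask
    current = [None] * n       # current[w] = man engaged to woman w
    free_men = list(range(n))  # free men kept in a simple list
    while free_men:
        m = free_men.pop()
        w = man_prefs[m][next_choice[m]]   # best woman m hasn't asked yet
        next_choice[m] += 1
        if current[w] is None:             # w is free: she accepts
            current[w] = m
        elif ranking[w][m] < ranking[w][current[w]]:
            free_men.append(current[w])    # w trades up; old partner freed
            current[w] = m
        else:
            free_men.append(m)             # w rejects m
    return current

# Tiny usage example: everyone's first choices line up.
print(stable_matching([[0, 1], [1, 0]], [[0, 1], [1, 0]]))  # -> [0, 1]
```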
| + | |||
| + | I found reading the book to be more understandable than the in-class discussion we had on Friday. This isn't to say that the discussion of Friday wasn't helpful, only that it made more sense reading the chapter after we had our discussion. By the end of class on Friday, I felt I had a good grasp on the data types used to implement the algorithm. After reading the chapter, I understood the reasoning behind each data structure. | ||
| + | |||
| + | I would say this chapter was 6 on an interesting scale (seeing as we had already discussed it), but an 8 on the readability scale. Breaking each function down by how it was implemented made a lot of sense to me. While we did do the same thing in class, it was a bit harder to follow simply because erroneous answers kept being proposed which threw me off a little bit. | ||
===== Section 2.4 =====
Section 2.4 describes the most commonly found algorithmic running times: O(//n//), O(//n// log //n//), O(//n//²) and other polynomial times such as O(//n//^k), exponential times such as O(2^//n//), and sublinear times such as O(log //n//).
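As a rough illustration (my own numbers, not the book's), tabulating a few of these running times for small //n// shows how quickly they separate:

```python
# How the common running times grow: even at n = 20 the exponential
# term dwarfs the polynomial ones.
import math

for n in [2, 8, 20]:
    print(f"n={n:2d}  n log n={n * math.log2(n):7.1f}  "
          f"n^2={n**2:4d}  n^3={n**3:5d}  2^n={2**n:8d}")
```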
| + | |||
| + | The one algorithm runtime I have always struggled with the most has been O(// | ||
| + | |||
| + | I would give this section a 6 on the interesting scale and an 8 on the readability scale. | ||
===== Section 2.5 =====
Section 2.5 introduces the first truly complex data structure, the priority queue. A priority queue is a container type that organizes elements based on their key values. When values are pulled from the queue, they are taken in order of priority, and lower key values have higher priority. The data structure used to implement a priority queue is called a heap. A heap can be visualized as a balanced binary tree in which every node has at most two children, and a parent's key is never larger than its children's keys.
| + | |||
| + | The advantages of a heap are that it can add and remove elements in O(log//n//) time, making it more efficient than both lists and arrays. It does this by utilizing the heapify-up and heapify-down functions, both of which we covered in class. | ||
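Since the section leans on heapify-up and heapify-down, here is a minimal array-backed min-heap sketch (my own code, not the book's pseudocode); both operations walk a single root-to-leaf path, which is where the O(log //n//) bound comes from.

```python
# Minimal array-backed min-heap: parent of index i sits at (i - 1) // 2,
# its children at 2i + 1 and 2i + 2.

class MinHeap:
    def __init__(self):
        self.a = []

    def push(self, key):
        self.a.append(key)               # add at the last leaf...
        self._heapify_up(len(self.a) - 1)  # ...then bubble it up

    def pop_min(self):
        a = self.a
        a[0], a[-1] = a[-1], a[0]        # move last leaf to the root
        smallest = a.pop()
        if a:
            self._heapify_down(0)        # restore the heap order
        return smallest

    def _heapify_up(self, i):
        a = self.a
        while i > 0 and a[i] < a[(i - 1) // 2]:
            a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
            i = (i - 1) // 2

    def _heapify_down(self, i):
        a, n = self.a, len(self.a)
        while True:
            left, right, small = 2 * i + 1, 2 * i + 2, i
            if left < n and a[left] < a[small]:
                small = left
            if right < n and a[right] < a[small]:
                small = right
            if small == i:
                return
            a[i], a[small] = a[small], a[i]
            i = small

# Usage: keys come back out in priority (ascending) order.
h = MinHeap()
for key in [5, 1, 4, 2, 3]:
    h.push(key)
print(h.pop_min())  # -> 1, the highest-priority (smallest) key
```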
| + | |||
| + | This chapter helped to reinforce my understanding of priority queues and their heap-like implementation. I am curious if a priorit queue could be implemented using a different kind of data structure. | ||
| + | I give this chapter a 7 on the interesting scale and 7 on the readability scale. | ||