While the book admits there are potentially better ways to define the efficiency of algorithms, years of trial and error have shown that defining efficiency in terms of polynomial time is //pretty good//. Using this definition, it becomes possible to show that some particular algorithm cannot run in polynomial time and is therefore inefficient.

This section was extremely logical and did not include too much of the mathematical syntax that always ends up confusing me. I rate this section an 8 for interestingness and a 9 for readability.
===== Section 2.2 =====
Section 2.2 introduces O, Ω, and Θ notation and how they can be used to describe the maximum, minimum, and exact asymptotic run times of algorithms as their inputs scale up or down. O(//f(n)//) for some function //f(n)// means that the algorithm in question takes at most a constant multiple of //f(n)// steps for sufficiently large //n//. Ω(//f(n)//) means that the algorithm takes at least a constant multiple of //f(n)// steps for sufficiently large //n//. While I understood both O and Ω notation, I am not completely sure I have Θ notation down pat.
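
To check my own understanding of Θ, here is a small worked example (mine, not the book's): a running time is Θ(//f(n)//) only when it can be bounded from above //and// below by constant multiples of //f(n)//.

<code latex>
% A hypothetical running time, just to exercise all three notations at once.
\[
\begin{aligned}
T(n) &= 3n^2 + 2n \\
T(n) &\le 3n^2 + 2n^2 = 5n^2 \quad (n \ge 1) && \Rightarrow\ T(n) = O(n^2) \\
T(n) &\ge 3n^2 \quad (n \ge 1)               && \Rightarrow\ T(n) = \Omega(n^2) \\
     &\text{both bounds hold}                && \Rightarrow\ T(n) = \Theta(n^2)
\end{aligned}
\]
</code>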
  
The beginning of the chapter was extremely interesting; I did not know two other notations for the run time of algorithms even existed. The proofs, however, were relatively complex, and I would be lying if I said that I understood each one of them after reading through this section. However, I bet knowing that each notation has the capabilities proved in this section will prove invaluable in the future when we write our own proofs. I give this section a 7 for how interesting it was but only a 3 for readability.
===== Section 2.3 =====
Section 2.3 goes over what we discussed in class, that is, how to implement the Gale-Shapley algorithm using lists and arrays. More specifically, the book uses fixed-size arrays, linked lists, and one 2D array to ensure the algorithm runs in O(//n//^2) time. The fixed-size arrays are used to implement the lists of who has proposed to whom and the final list of engagements. Arrays make sense in this situation since each list will only hold a maximum number of elements, and an array's indexing operations run in O(1), so checking whether a man has proposed to a woman takes constant time. A linked list is used to implement the list of unmatched men, while the 2D array is used to represent each man's preferences. The same data structures apply to the women and their preferences.
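
To make sure I actually follow the data-structure choices, below is a rough Python sketch of how I picture the setup (the names //gale_shapley//, //next_proposal//, and //ranking// are mine, not the book's, and a deque plays the role of the linked list of free men):

<code python>
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Stable matching using the array/list setup described in Section 2.3.

    men_prefs[m] and women_prefs[w] rank the other side from most to least
    preferred (0-based indices). Runs in O(n^2) time overall.
    """
    n = len(men_prefs)
    # 2D ranking array: ranking[w][m] = position of man m on woman w's list,
    # so "does w prefer m to her current partner?" is an O(1) lookup.
    ranking = [[0] * n for _ in range(n)]
    for w in range(n):
        for pos, m in enumerate(women_prefs[w]):
            ranking[w][m] = pos
    next_proposal = [0] * n     # next woman each man will propose to
    wife = [None] * n           # wife[m] = current partner of man m
    husband = [None] * n        # husband[w] = current partner of woman w
    free_men = deque(range(n))  # plays the role of the linked list of free men

    while free_men:
        m = free_men[0]                        # some free man
        w = men_prefs[m][next_proposal[m]]     # highest-ranked woman not yet asked
        next_proposal[m] += 1
        if husband[w] is None:                 # w is free: engage
            free_men.popleft()
            wife[m], husband[w] = w, m
        elif ranking[w][m] < ranking[w][husband[w]]:   # w prefers m: trade up
            old = husband[w]
            wife[old] = None
            free_men.popleft()
            free_men.append(old)
            wife[m], husband[w] = w, m
        # otherwise w rejects m and he stays free, proposing again next round
    return wife

# Tiny made-up example with n = 3: man i ends up matched with woman wife[i].
men = [[0, 1, 2], [1, 0, 2], [0, 2, 1]]
women = [[1, 0, 2], [0, 1, 2], [2, 1, 0]]
print(gale_shapley(men, women))   # -> [0, 1, 2]
</code>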

I found reading the book to be more understandable than the in-class discussion we had on Friday. This isn't to say that Friday's discussion wasn't helpful, only that the chapter made more sense reading it after we had our discussion. By the end of class on Friday, I felt I had a good grasp of the data structures used to implement the algorithm. After reading the chapter, I understood the reasoning behind each one.

I would say this section was a 6 on the interesting scale (seeing as we had already discussed it), but an 8 on the readability scale. Breaking each function down by how it was implemented made a lot of sense to me. While we did do the same thing in class, it was a bit harder to follow simply because erroneous answers kept being proposed, which threw me off a little bit.
===== Section 2.4 =====
Section 2.4 describes the most commonly found algorithmic run times, which include O(//n//), O(//n//^//k//), O(//n// log //n//), and O(log //n//). It also discusses some of the slowest run times covered, O(2^//n//) and O(//n//!). For all of the run times, example problems were given, and most had examples of the actual algorithms themselves. While I have seen most of these run times before, it was good to reinforce my knowledge of them after our in-class discussion.

The one run time I have always struggled with the most has been O(//n// log //n//). Reading this chapter helped me garner a much better understanding of why this seemingly strange run time arises so often and how it shows up in different settings.
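
To pin that down for myself, here is a short Python sketch of mergesort, the classic divide-and-conquer routine behind the O(//n// log //n//) bound: the input gets halved about log //n// times, and each level does an O(//n//) merge.

<code python>
def merge_sort(items):
    """Sort a list in O(n log n): log n levels of halving, O(n) merge per level."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7, 3]))   # -> [1, 2, 3, 5, 7, 9]
</code>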

I would give this section a 6 on the interesting scale and an 8 on the readability scale.
===== Section 2.5 =====
Section 2.5 introduces the first truly complex data structure, the priority queue. A priority queue is a container type that organizes elements by their key values: when values are pulled from the queue, they come out in order of priority, with lower keys having higher priority. The actual data structure that implements said priority queue is called a heap. A heap can be visualized as a kind of binary tree in which every node has at most one parent and at most two children, and each parent node's key is always less than or equal to its children's keys.

The advantage of a heap is that it can add and remove elements in O(log //n//) time, making it more efficient than either lists or arrays for these operations. It does this by utilizing the heapify-up and heapify-down functions, both of which we covered in class.
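
Here is a small Python sketch of how I picture those operations (a min-heap stored in a plain list; the class and method names are mine, not the book's):

<code python>
class MinHeap:
    """Array-backed binary min-heap: parent of index i is at (i - 1) // 2,
    children are at 2i + 1 and 2i + 2. The parent's key is always <= its
    children's keys, so the smallest (highest-priority) key sits at index 0."""

    def __init__(self):
        self.a = []

    def insert(self, key):
        self.a.append(key)
        self._heapify_up(len(self.a) - 1)   # O(log n): bubble the new key up

    def extract_min(self):
        if not self.a:
            raise IndexError("extract_min from an empty heap")
        self.a[0], self.a[-1] = self.a[-1], self.a[0]
        smallest = self.a.pop()
        if self.a:
            self._heapify_down(0)           # O(log n): push the swapped key down
        return smallest

    def _heapify_up(self, i):
        while i > 0 and self.a[i] < self.a[(i - 1) // 2]:
            parent = (i - 1) // 2
            self.a[i], self.a[parent] = self.a[parent], self.a[i]
            i = parent

    def _heapify_down(self, i):
        n = len(self.a)
        while True:
            target = i
            for child in (2 * i + 1, 2 * i + 2):
                if child < n and self.a[child] < self.a[target]:
                    target = child
            if target == i:
                return
            self.a[i], self.a[target] = self.a[target], self.a[i]
            i = target

# Usage: keys come back out in priority (ascending) order.
h = MinHeap()
for key in [7, 3, 9, 1]:
    h.insert(key)
print(h.extract_min(), h.extract_min())   # -> 1 3
</code>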

This chapter helped to reinforce my understanding of priority queues and their heap-based implementation. I am curious whether a priority queue could be implemented using a different kind of data structure.
  
I give this chapter a 7 on the interesting scale and a 7 on the readability scale.