====== Chapter 2: Basics of Algorithm Analysis ======

This chapter serves as an introduction to understanding the resources that algorithms require, namely their running times and space usage, and how different algorithms compare to one another.

===== 2.3: Implementing the Stable Matching Algorithm Using Lists and Arrays =====

In this section we analyzed the pros and cons of implementing the stable matching algorithm using lists and arrays.

The simplest structure we could use is the array, because it is very easy to implement in pretty much every programming language, and it lets us look up the ith element in constant time.

The more efficient and preferable structure in our case is the (doubly) linked list, because its pointers let us insert at the front or back of the list in constant time. However, we cannot find the ith element in constant time; reaching it takes O(i), because we have to traverse the list until the desired position is found.

I thought it was very interesting that, in order to implement the GS Algorithm in quadratic time, each iteration of the algorithm must run in constant time. I also wonder why a stack would not be a good implementation for the preference lists, as being able to just pop off the most preferred remaining person (for the men) seems relatively quick and simple.

The fact that we can make step 4 of the GS Algorithm (finding out whether w prefers m or m') work in constant time is pretty cool to me. At face value, I would have thought it impossible, but the n×n array is a pretty useful structure that we put to use, and this whole section cleared up a lot of what I was still not 100% sure about after the discussion in class. I would give it a 6 out of 10 for readability.

| + | |||
| + | ===== 2.4: A Survey of Common Running Times ===== | ||
| + | |||
The goal of this section was to take a look at the most common running-time bounds and the ways that we interpret them. It started off by talking about linear, O(n), time.

For O(n), we usually process the input in a single pass, as the book tells us. The example provided is finding the maximum of a list of numbers: we iterate through the list, checking each number and storing the largest seen so far until the list is finished.

Next is O(n log n) time. As the book describes it, this predominantly arises in algorithms that split the input into two equal-sized pieces, solve each piece recursively, and then combine the two solutions in linear time. The place we most commonly see this running time is in sorting algorithms such as mergesort.

In quadratic running time, we usually have two nested loops that each iterate through the input n times, for example to examine every pair of input items, giving us O(n^2).

Cubic time is not as widely seen, but it is still an important running time. It typically involves a triply nested loop, which is less common but can be practical.

Polynomial time, O(n^k), describes algorithms that work over all subsets of size k of the input. The example given here is finding independent sets of size k in a graph.

Looking beyond polynomial time, we have exponential and even factorial running times, but these are highly inefficient and should be avoided whenever possible.

Lastly, we have sublinear time, which gives the best running times we as programmers could ask for, though they are difficult to achieve.

After reading this section, I'd still like to see some other examples of O(n log n), O(n^k), O(2^n), and O(log n) algorithms, although they all make a lot more sense now, especially O(n^k) and O(log n).

Overall, I'd say readability was a 9/10, because it was a great review of the simpler running times I already knew and gave good explanations of the ones I was less familiar with. I would have liked to see an example for each running time, though, so I could not give it the full 10/10.

| + | ===== 2.5: A More Complex Data Structure: Priority Queues ===== | ||
| + | |||
The priority queue is a widely useful and efficient structure for maintaining a set of elements with keys (priorities); among other things, it lets us sort n values in O(n log n) time. It is typically implemented with a heap.

In a heap, we have a tree stored in an array, where the lowest key is at the top (index 1) and every key below is greater than or equal to the key above it; the children of node i sit at indices 2i and 2i + 1.

The procedure described to add elements to a heap is called Heapify-up: the new element is placed in the first open slot at the bottom of the tree and repeatedly swapped with its parent until it is no longer smaller than the key above it.

If we delete an element in a heap, the hole must be filled. The book does this by moving the last element in the array into the hole and then running Heapify-down (or Heapify-up) to restore the heap order.

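Both repair operations can be sketched together; this is my own minimal Python version of the idea on the 1-based array layout, not the book's pseudocode:

```python
def heapify_up(h, i):
    # A new element bubbles up while it is smaller than its parent
    while i > 1 and h[i] < h[i // 2]:
        h[i], h[i // 2] = h[i // 2], h[i]
        i //= 2

def heapify_down(h, i):
    # An element filling a hole sinks below any smaller child
    n = len(h) - 1
    while True:
        smallest = i
        for c in (2 * i, 2 * i + 1):
            if c <= n and h[c] < h[smallest]:
                smallest = c
        if smallest == i:
            return
        h[i], h[smallest] = h[smallest], h[i]
        i = smallest

def insert(h, value):
    h.append(value)                 # place in the first open slot
    heapify_up(h, len(h) - 1)

def extract_min(h):
    root = h[1]
    h[1] = h[-1]                    # move the last element into the hole
    h.pop()
    if len(h) > 1:
        heapify_down(h, 1)
    return root
```

Both operations walk one root-to-leaf path of a balanced tree, which is why each takes O(log n).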
This section as a whole felt more like review, as I had a pretty good grasp on the material after we covered it in class.