courses:cs211:winter2011:journals:david:chapter2 (margoliesd, last revised 2011/01/25 20:53)
====2.2 - Asymptotic Order of Growth====
The worst-case performance of an algorithm can be expressed as a function of the size of its input. The algorithms in the book are written in pseudo-code, so when we count the number of steps we are not actually counting operations on the CPU; I think that count could be architecture-dependent. An upper bound on the running time of an algorithm is a constant multiple of its highest-order term (or of an even higher-order term). A lower bound is a constant multiple of its highest-order term or of one of lower order. A tight bound occurs when we choose exactly the highest-order term of the polynomial, instead of a higher order for an upper bound or a lower order for a lower bound.
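To make the tight-bound idea concrete, here is a quick numeric check of my own (not from the book) that f(n) = 3n^2 + 10n is squeezed between constant multiples of n^2, which is what a tight bound means:

```python
# My own sketch (not the book's): numerically check that
# f(n) = 3n^2 + 10n sits between c1*n^2 and c2*n^2 for
# suitable constants once n is at least n0.

def f(n):
    return 3 * n**2 + 10 * n

c1, c2, n0 = 3, 4, 10  # claimed constants for the tight bound
for n in range(n0, 1000):
    assert c1 * n**2 <= f(n) <= c2 * n**2
print("3n^2 + 10n is Theta(n^2) with c1=3, c2=4 for n >= 10")
```

The check passes because 10n <= n^2 whenever n >= 10, so the lower-order term is eventually dominated by the highest-order one.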
+ | |||
+ | ====2.3 - Implementing the Stable Matching Algorithm Using Lists and Arrays==== | ||
The implementation of an algorithm depends on the choice of data structures. Arrays have fast query times; that is, it is O(1) to find the i-th element of an array.
+ | |||
+ | The book gives a good implementation for the Stable Matching Problem. We can have two arrays, one for all men and the other for all women. We can store each person' | ||
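As a sketch of what an array-based implementation might look like (my own code, not the book's pseudo-code; I assume people are numbered 0..n-1 and that preference lists are given as arrays):

```python
# A hedged sketch of Gale-Shapley using plain arrays/lists.
# men_prefs[m] and women_prefs[w] are preference lists of indices.

def stable_matching(men_prefs, women_prefs):
    n = len(men_prefs)
    # rank[w][m] = position of man m in woman w's list, so comparing
    # two suitors is O(1) instead of scanning her whole list.
    rank = [[0] * n for _ in range(n)]
    for w in range(n):
        for pos, m in enumerate(women_prefs[w]):
            rank[w][m] = pos
    next_proposal = [0] * n   # next index into each man's list
    wife = [None] * n         # wife[m] and husband[w] hold the matching
    husband = [None] * n
    free_men = list(range(n)) # stack of currently unmatched men
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if husband[w] is None:            # w is free: engage
            husband[w], wife[m] = m, w
        elif rank[w][m] < rank[w][husband[w]]:  # w prefers m: switch
            free_men.append(husband[w])
            wife[husband[w]] = None
            husband[w], wife[m] = m, w
        else:                             # w rejects m
            free_men.append(m)
    return wife

# Tiny example with n = 2: both men prefer woman 0.
print(stable_matching([[0, 1], [0, 1]], [[0, 1], [1, 0]]))
```

The rank array is the key trick: precomputing it costs O(n^2) up front but makes each "does w prefer m to her current partner?" check constant time.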
+ | |||
+ | I give this section a readability of 8/10. | ||
+ | |||
+ | ====2.4 - A Survey of Common Running Times==== | ||
The search space contains all possible solutions to a problem, and a brute-force search simply looks through all of these solutions to find the one that matches the problem's requirements.
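To illustrate brute-force search over a search space, here is a toy example of my own (not from the book): the search space is all pairs of numbers in a list, and checking every pair to find the closest two takes O(n^2) comparisons.

```python
# My own toy example of brute-force search: enumerate the whole
# search space (all pairs) and keep the best candidate seen so far.

def closest_pair_brute_force(nums):
    best = None
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):      # every pair once
            gap = abs(nums[i] - nums[j])
            if best is None or gap < best[0]:
                best = (gap, nums[i], nums[j])
    return best

print(closest_pair_brute_force([7, 1, 4, 9]))
```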
+ | |||
+ | I give this section a readability of 5/10. | ||
+ | |||
+ | ====2.5 - A More Complex Data Structure: Priority Queues==== | ||
+ | |||
An efficient algorithm can be improved by using a different kind of data structure. In a priority queue, all elements have an associated key or value, and the lower the value, the higher the priority. Operating systems use priority queues to determine which process gets to use the CPU. In a priority queue, adding an element, deleting an element, and extracting the highest-priority element each take O(log n) time. Priority queues can be implemented using a heap, which is visually like a "balanced binary tree."
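As a quick sketch of this behavior (my own example, using Python's standard heapq module, in which smaller keys mean higher priority):

```python
# My own example: heapq keeps the list organized as a binary heap,
# so push and pop are both O(log n), and pop always returns the
# element with the smallest key (highest priority).
import heapq

pq = []
heapq.heappush(pq, (2, "write journal"))
heapq.heappush(pq, (1, "take exam"))
heapq.heappush(pq, (3, "sleep"))

while pq:
    priority, task = heapq.heappop(pq)   # comes out in key order
    print(priority, task)
```

Popping the three entries prints them in priority order (1, then 2, then 3) regardless of the order they were pushed.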
+ | |||
+ | I give this section a readability of 7/10. |