Next, the section moves on to running times beyond polynomial. These running times are too inefficient to handle any significant amount of data. An example of this is exponential running time, which happens when we have to iterate over every subset of a data set. This causes us to have to perform O(2<sup>n</sup>) operations.\\
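To make the 2<sup>n</sup> growth concrete, here is a short sketch (my own illustration, not code from the text) that enumerates every subset of a set using bitmasks:

```python
def all_subsets(items):
    """Return every subset of items. A set of n items has 2**n
    subsets, so this loop runs 2**n times -- exponential time,
    impractical beyond small n."""
    n = len(items)
    subsets = []
    for mask in range(2 ** n):  # one iteration per subset
        subset = [items[i] for i in range(n) if mask & (1 << i)]
        subsets.append(subset)
    return subsets

print(len(all_subsets(['a', 'b', 'c'])))  # 8 subsets for n = 3
```

Adding one more element to the input doubles the number of iterations, which is exactly why exponential algorithms stop being feasible so quickly.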
\\
Section 2.4 ends with a brief discussion of sub-linear running times, most notably logarithmic time (O(log//n//)). This running time happens when you are able to reduce the amount of data by a factor of two each time you iterate through the data. Binary search runs in logarithmic time.\\
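A standard binary search (my own sketch) shows the halving at work: each comparison discards half of the remaining range, so at most about log//n// comparisons are needed on a sorted list.

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent.
    Each iteration halves the search range, so the loop runs
    O(log n) times."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1   # target can only be in the upper half
        else:
            hi = mid - 1   # target can only be in the lower half
    return -1

print(binary_search([2, 5, 8, 12, 16, 23], 12))  # index 3
```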
\\
===== Section 2.5 (Priority Queues) =====
This section mainly discusses the priority queue and its implementation.\\
\\
The optimal data structure for implementing a priority queue, according to this section, is a //heap//. A heap is a tree structure in which each node has at most two children, each with a value equal to or greater than its own. Because of this form, we can access the lowest value in constant time: the root node, by definition, holds the smallest value.\\
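A heap is usually stored as a plain array, with the children of the node at index i sitting at indices 2i+1 and 2i+2. A small example of my own:

```python
# A valid min-heap stored as an array:
#         2
#       /   \
#      5     7
#     / \
#    9   6
heap = [2, 5, 7, 9, 6]

# Every parent at index i is <= its children at 2*i+1 and 2*i+2,
# so the minimum is always sitting at the root.
print(heap[0])  # 2 -- constant-time access to the smallest value
```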
\\
Throughout the process of working with the heap, its structure will inevitably become compromised, either by adding a new item with a low value at a leaf position or by removing an item from higher in the tree. We fix these issues with the heapify-up and heapify-down procedures. Heapify-up works by swapping the problem node with its parent, moving it up toward the root until it is no longer smaller than its parent; this is primarily used to reorganize a heap after a small value has been added at the bottom. Heapify-down does the opposite: we swap a parent node with the smaller of its two children and repeat until the node is smaller than both of its new children or becomes a leaf.\\
\\
Both of these methods run in O(log//n//) time. This is possible because of the heap's structure: since each layer of the heap holds twice as many nodes as the previous one, the depth of the heap is log//n// relative to the total number of nodes. So when we traverse up or down the entire depth of the tree, we make at most log//n// swaps.\\
\\
Other than initializing the heap, every heap operation (inserting, deleting, and finding the minimum) runs in sub-linear time.
courses/cs211/winter2018/journals/hornsbym/chapter_2.1517283061.txt.gz · Last modified: by hornsbym