courses:cs211:winter2018:journals:bowmang:chapter2 (revised 2018/01/30 by bowmang)
===== Chapter 2.2 (Asymptotic Order of Growth) =====
__Runtime Bounds__
  * There is no need to state runtime bounds with extreme precision; doing so is an exhaustive effort and unnecessary (unless you are under oppressively fine constraints on an algorithm's runtime).
  * By using a certain level of vagueness, similarities between algorithms start to appear.
__Asymptotic Upper Bounds__
  * An upper bound O(f(n)) exists if the worst-case running time is at most a constant multiple of f(n).
__Asymptotic Lower Bounds__
  * A lower bound Ω(f(n)) exists if the running time is at least a constant multiple of f(n) for a sufficiently large input size n (necessary to prove scalability claims and to show an upper bound cannot be improved).
__Asymptotic Tight Bounds__
  * If we can show that an asymptotic upper bound and an asymptotic lower bound match, we have found the right bound, written Θ(f(n)).
  * These tight bounds are good to find for worst-case running times because they pin down the runtime precisely.
__Properties of Asymptotic Growth Rates__
  * Transitivity: if f = O(g) and g = O(h), then f = O(h); the analogous statements hold for Ω and Θ.
  * Sums of Functions: if an upper bound applies to both function f and function g individually, then it applies to the sum f + g.
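These two properties can be written out formally (standard statements of these facts, added here for reference):

```latex
% Transitivity of asymptotic upper bounds:
f = O(g) \ \text{and}\ g = O(h) \implies f = O(h)

% Sums of functions: a shared upper bound survives addition.
f = O(h) \ \text{and}\ g = O(h) \implies f + g = O(h)
```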
__Asymptotic Bounds for Some Common Functions__
  * Polynomials:
    * If a polynomial has degree d, then its running time is O(n<sup>d</sup>).
===== Chapter 2.4 (Common Running Times) =====
__Linear Time - O(n)__
  * A linear running time means that the cost of a function grows at most at a constant rate times the input size n.
  * Computing the Maximum:
    * Computing the maximum number in a list can be done with a single pass through the list. Hold the current max value in a variable and update it whenever you encounter a larger value while iterating through the list.
  * Merging Two Sorted Lists:
    * Keep a pointer at the head of each sorted list. Compare the two current items, append the smaller one to the output list, and advance that list's pointer. Repeat until one list is empty, then append the rest of the other list to the output, and voila.
__O(n logn) Time__
  * O(n logn) is the running time of any algorithm that splits its input into two equal halves, solves each half recursively, and then combines the two solutions in linear time.
  * Sorting (ex. merge sort) is the best-known example of this pattern.
  * Merge sort works by splitting the input into two equally sized halves. The two halves are sorted recursively and then merged into a single sorted output list.
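The split-recurse-merge steps above can be sketched as follows (an illustrative version, not an optimized one):

```python
def merge_sort(xs):
    """Sort by splitting in half, sorting each half recursively, then merging."""
    if len(xs) <= 1:
        return xs  # a list of 0 or 1 items is already sorted
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])
    right = merge_sort(xs[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```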
| + | |||
__Quadratic Time - O(n<sup>2</sup>)__
  * An example of quadratic running time is finding the smallest distance between any pair of n given points in the plane.
  * First, generate every possible pair of points by iterating two nested for loops over the n points.
  * For each pair, compute the distance using the constant-time distance formula (no need to write it out). If the distance is smaller than the minimum found so far, replace the minimum.
  * Return the minimum when the for loops are over.
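The brute-force pair search above can be sketched as (function name is mine):

```python
from itertools import combinations
from math import hypot

def closest_pair_distance(points):
    """Brute-force O(n^2): examine every pair of points, keep the smallest distance."""
    best = float("inf")
    # combinations(points, 2) yields each unordered pair exactly once.
    for (x1, y1), (x2, y2) in combinations(points, 2):
        d = hypot(x1 - x2, y1 - y2)  # constant-time distance formula
        if d < best:
            best = d
    return best
```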
| + | |||
__Cubic Time - O(n<sup>3</sup>)__
  * An example of cubic running time: given several sets, each a subset of {1, ..., n}, check whether the sets are pairwise disjoint (no two have an element in common).
  * For each set S1:
    * For each other set S2:
      * For each item in S1:
        * Determine whether the item is also in S2.
  * Each set has at most n elements, so the three nested for loops give O(n<sup>3</sup>).
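A direct translation of the triple loop above (sets represented as plain lists, so membership testing is a linear scan as the analysis assumes):

```python
def all_pairwise_disjoint(sets):
    """Check every pair of sets; for each pair, test each element of one
    against the other. With lists, `in` is an O(n) scan, so the three
    nested loops cost O(n^3) overall."""
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            for item in sets[i]:
                if item in sets[j]:  # O(n) linear scan of a list
                    return False     # found a shared element
    return True
```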
| + | |||
__O(n<sup>k</sup>) Time__
  * An example of O(n<sup>k</sup>) running time is searching for an independent set of size k in a graph G.
  * For a given k, determine whether G contains an independent set of k nodes.
    * Enumerate all subsets of k nodes - this runs in O(n<sup>k</sup>) time.
    * For each subset S, check whether any edge joins two nodes of S - this takes O(k<sup>2</sup>) time per subset, a constant for fixed k, so it does not change the overall running time.
    * If no edge joins two nodes of S, then S is an independent set.
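The enumerate-and-check procedure above can be sketched like this (a minimal version; names and the edge representation are my own):

```python
from itertools import combinations

def has_independent_set(nodes, edges, k):
    """Enumerate all size-k subsets (O(n^k) of them); for each, check
    whether any edge joins two of its nodes (O(k^2) per subset, a
    constant for fixed k)."""
    edge_set = {frozenset(e) for e in edges}  # undirected edges
    for subset in combinations(nodes, k):
        internal_edge = any(frozenset(pair) in edge_set
                            for pair in combinations(subset, 2))
        if not internal_edge:
            return True  # no edge inside the subset: it is independent
    return False
```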
| + | |||
__Beyond Polynomial Time__
  * There are other common running times you will come across that grow even faster, such as O(2<sup>n</sup>) and O(n!).
  * O(2<sup>n</sup>) Time
    * 2<sup>n</sup> is the total number of subsets of an n-element set.
    * Arises naturally as a running time for a search algorithm that must consider all subsets.
  * O(n!) Time
    * Grows even more rapidly than O(2<sup>n</sup>).
    * Often arises for two reasons:
      * n! is the number of ways to match up n items with n other items (ex. the brute-force stable matching problem).
      * n! is the number of ways to arrange n items in order (ex. the traveling salesman problem).
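To see where the 2<sup>n</sup> count comes from, here is a brute-force enumeration of all subsets (illustrative only; even O(1) work per subset already costs O(2^n) in total):

```python
from itertools import combinations

def all_subsets(items):
    """Generate every subset of items. There are 2^n of them, which is
    why exhaustive subset search is exponential."""
    subsets = []
    for size in range(len(items) + 1):  # subsets of size 0, 1, ..., n
        subsets.extend(combinations(items, size))
    return subsets
```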
| + | |||
__Sublinear Time__
  * While it only happens rarely, running times can be asymptotically smaller than linear.
  * This usually occurs when the input can be "queried" rather than read in full, since merely reading the whole input already takes linear time.
  * Binary Search - O(logn)
    * By nature, binary search only works on sorted arrays, so we can take advantage of that and probe halfway points in the array until we pinpoint the item.
    * Each recursive step effectively removes half of the remaining search range.
    * The O(logn) running time applies only when a constant amount of work suffices to discard half of the remaining input at each step.
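An iterative sketch of the halving described above (equivalent to the recursive version; the function name is mine):

```python
def binary_search(sorted_list, target):
    """Return an index of target in sorted_list, or -1 if absent.
    Each probe of the midpoint discards half the remaining range,
    giving O(log n) probes."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1  # target can only be in the upper half
        else:
            hi = mid - 1  # target can only be in the lower half
    return -1
```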
| + | |||
===== Chapter 2.5 (Priority Queues - Heaps) =====

__The Problem__
  * Priority queues are designed for applications in which elements carry a priority value (or key) and we repeatedly want to select the element with the highest priority.
  * Smaller keys represent higher priorities.
  * Useful for situations that manage real-time events (ex. scheduling processes on a computer).
  * Running times:
    * Adding = O(logn)
    * Removing = O(logn)
    * Selecting = O(logn)
    * Sorting = O(n logn)
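Python's standard heapq module provides exactly this min-first behavior (smaller key = higher priority); the event names below are invented for illustration:

```python
import heapq

# Each entry is a (priority key, task) pair; heapq compares on the key first.
events = []
heapq.heappush(events, (2, "write report"))
heapq.heappush(events, (1, "handle interrupt"))
heapq.heappush(events, (3, "idle"))

# Pops always return the entry with the smallest key first.
order = [heapq.heappop(events)[1] for _ in range(len(events))]
```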
| + | |||
__Implementation of a Priority Queue (Heap)__
  * Heaps are like balanced binary trees: there is a root, and each node has at most one left and one right child.
  * Keys are in heap order if each key is greater than or equal to the key of its parent node (keys grow as you move down the tree, essentially).
  * The heap is stored in an array rather than with pointers (using 1-based indexing):
    * Root = first item in the array
    * LeftChild(i) = List[2i]
    * RightChild(i) = List[2i + 1]
    * Parent(i) = List[i/2]
  * Operation Running Times:
    * StartHeap(n) = O(n)
      * Returns an initialized empty heap H backed by an array of capacity n, hence the linear running time.
    * FindMin(H) = O(1)
      * Returns the first item in the array, which is the smallest key in the heap.
    * Insert(H, v) = O(logn)
      * Appends v at the end of the array and uses Heapify-Up to restore heap order, hence the O(logn) running time.
    * Delete(H, i) = O(logn)
      * Replaces position i with the last element, then uses either Heapify-Up or Heapify-Down to restore heap order.
    * ExtractMin(H) = O(logn)
      * Removes the minimum key in the heap by combining FindMin(H) and Delete(H, i), hence the O(logn) running time.
  * By maintaining a separate Position array we can also support:
    * Delete(H, Position[v]) = O(logn)
    * ChangeKey(H, v, newKey) = O(logn)
  * Heapify-Up
    * Push a too-small key upward in the tree by swapping it with its parent node.
    * Then run Heapify-Up on the parent position (effectively on the moved key again).
  * Heapify-Down
    * Push a too-large key downward in the tree by swapping it with the smaller of its two children.
    * Then run Heapify-Down on the child position the key was swapped into (effectively on the moved key again).
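The array layout and the two Heapify routines above can be sketched together in one small class (a minimal version using 0-based indexing, so children sit at 2i+1 and 2i+2 and the parent at (i-1)//2, rather than the 1-based indices in the notes):

```python
class MinHeap:
    """Array-backed binary min-heap: each key is >= its parent's key."""

    def __init__(self):
        self.a = []

    def insert(self, key):
        # Append at the end, then Heapify-Up restores heap order in O(log n).
        self.a.append(key)
        self._heapify_up(len(self.a) - 1)

    def extract_min(self):
        # Swap the min to the end, remove it, then Heapify-Down the root.
        self.a[0], self.a[-1] = self.a[-1], self.a[0]
        smallest = self.a.pop()
        if self.a:
            self._heapify_down(0)
        return smallest

    def _heapify_up(self, i):
        # Push a too-small key upward by swapping with its parent.
        while i > 0 and self.a[i] < self.a[(i - 1) // 2]:
            parent = (i - 1) // 2
            self.a[i], self.a[parent] = self.a[parent], self.a[i]
            i = parent

    def _heapify_down(self, i):
        # Push a too-large key downward by swapping with its smaller child.
        n = len(self.a)
        while True:
            smallest = i
            for child in (2 * i + 1, 2 * i + 2):
                if child < n and self.a[child] < self.a[smallest]:
                    smallest = child
            if smallest == i:
                return
            self.a[i], self.a[smallest] = self.a[smallest], self.a[i]
            i = smallest
```

Repeatedly calling extract_min after inserting n keys yields them in sorted order, which is the O(n logn) priority-queue sort mentioned above.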
