===== Chapter 2.2 (Asymptotic Order of Growth) =====
__Runtime Bounds__
* No need to get extremely specific with bounds, as that is an exhaustive and unnecessary effort (unless you are under oppressively fine constraints for an algorithm's running time).
* By using a certain level of vagueness, similarities between algorithms start to appear.
__Asymptotic Upper Bounds__
* An upper bound exists if the worst-case running time is less than a constant multiple of f(n).
__Asymptotic Lower Bounds__
* A lower bound exists if the running time is at least a constant multiple of f(n) for a sufficiently large size n (necessary to show that a matching upper bound is actually tight).
__Asymptotically Tight Bounds__
* If we can show that an asymptotic upper bound and an asymptotic lower bound on the running time match, then we have found the right (tight) bound.
* These tight bounds are good to find for worst-case runtime scenarios as they show the runtime precisely.
__Properties of Asymptotic Growth Rates__
* Transitivity: if f = O(g) and g = O(h), then f = O(h) (the same holds for lower and tight bounds).
* Sums of Functions: if an upper bound applies to both function F and function G individually then it applies to the sum of the functions F and G.
__Asymptotic Bounds for Common Functions__
* Polynomials
* If a polynomial is of degree D then the runtime is O(n^D).
===== Chapter 2.4 (Common Running Times) =====
__Linear Time__
* A linear running time means that a function will grow at most at a constant rate times the input size (n).
* Computing the Maximum: process the items in one pass, keeping a running maximum; constant work per item gives O(n) total (see the sketch after this list).
* Have pointers directed to the head of each sorted list. Then just see which item is smaller and append that to the output list. Move the pointer from that list along to the next item and repeat this process until one list is empty. Then append the rest of the other list to the output list and voila.
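A minimal Python sketch of both linear-time routines described above; the function names (find_max, merge_sorted) are illustrative, not from the text.

<code python>
def find_max(items):
    """One pass with a running maximum: constant work per item, O(n) total."""
    best = items[0]
    for x in items[1:]:
        if x > best:
            best = x
    return best

def merge_sorted(a, b):
    """Merge two already-sorted lists with two pointers: O(n) total."""
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:              # take the smaller head element
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])                 # one of these extends is empty
    out.extend(b[j:])
    return out
</code>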
__O(n logn) Time__
* O(n logn) is the running time of any algorithm that splits its input into two equal halves, solves each piece recursively, and then combines the two solutions in linear time.
* Sorting (ex. merge sort) is the best known example of this problem.
* Merge sort works by splitting the input into two equally-sized halves. The two halves are then sorted recursively and then merged into a single sorted output list.
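A sketch of merge sort itself, reusing the hypothetical merge_sorted helper from the previous block as the linear-time combine step.

<code python>
def merge_sort(items):
    """Split in half, sort each half recursively, merge in linear time:
    this recurrence solves to O(n logn)."""
    if len(items) <= 1:                  # base case: already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    return merge_sorted(left, right)     # linear-time merge from the sketch above
</code>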
__Quadratic Time__
* An example of quadratic running time is finding the smallest distance between any two of n points in the plane.
* First you can look at every possible pair of points (x, y) by iterating two nested for loops over the n points.
* Compute the distance for each pair, track the current minimum, and return the minimum when the for loops are over (see the sketch below).
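A brute-force Python sketch of the pairwise check; closest_pair_distance is a made-up name for illustration.

<code python>
from math import dist, inf

def closest_pair_distance(points):
    """Examine all n*(n-1)/2 pairs of points: O(n^2)."""
    best = inf
    for i in range(len(points)):
        for j in range(i + 1, len(points)):      # each pair exactly once
            best = min(best, dist(points[i], points[j]))
    return best
</code>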
__Cubic Time - O(n<sup>3</sup>)__
* An example of cubic running time is when you have multiple sets which are each subsets of {1, ..., n}, and you want to check whether any two of the sets are disjoint (have no elements in common).
* For each pair of sets S1 and S2: check each element p of S1 against S2; if no element of S1 appears in S2, the pair is disjoint.
* Each set can only have O(n) elements, so the three nested for loops result in O(n<sup>3</sup>) running time (see the sketch below).
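A direct translation of the triple loop into Python, assuming the sets are given as Python sets so each membership test is O(1) on average (matching the constant-time membership check the bound relies on); find_disjoint_pair is an illustrative name.

<code python>
def find_disjoint_pair(sets):
    """sets: a list of Python sets, each a subset of {1, ..., n}.
    O(n^2) pairs, O(n) elements per pair, O(1) membership: O(n^3) total."""
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            # disjoint if no element of sets[i] also appears in sets[j]
            if all(p not in sets[j] for p in sets[i]):
                return i, j               # indices of a disjoint pair
    return None
</code>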
__O(n<sup>k</sup>) Time__
* An example of when O(n<sup>k</sup>) running time occurs:
* For some fixed k, determine whether there exists an independent set of size k in graph G.
* Enumerate all subsets of k nodes - this will take O(n<sup>k</sup>) time, since there are at most n<sup>k</sup> such subsets.
* Check whether any edge joins two nodes inside each subset S - for fixed k this takes constant time per subset, so it doesn't change the overall O(n<sup>k</sup>) running time.
* If no edge joins two nodes of S, then S is an independent set (see the sketch below).
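A sketch of the brute-force search in Python; the graph format (nodes plus a collection of edge pairs) and the function name are assumptions for illustration.

<code python>
from itertools import combinations

def has_independent_set(nodes, edges, k):
    """Try all C(n, k) = O(n^k) subsets of k nodes; a subset is
    independent if no edge joins two of its nodes."""
    edge_set = {frozenset(e) for e in edges}
    for subset in combinations(nodes, k):            # O(n^k) subsets
        # for fixed k this inner check is constant work per subset
        if all(frozenset(pair) not in edge_set
               for pair in combinations(subset, 2)):
            return True
    return False
</code>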
__Beyond Polynomial Time__
* There are countless other common running times that you will come across. These include things like O(2<sup>n</sup>) and O(n!).
* O(2<sup>n</sup>) is the number of subsets of an n-element set, so it arises when brute-force search has to try every subset (ex. finding the largest independent set; sketched after this list).
* n! is the number of ways to arrange n items in order (ex. traveling salesman problem).
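For contrast with the O(n<sup>k</sup>) version above, a sketch of the O(2<sup>n</sup>) pattern: when k is not fixed, finding the largest independent set means trying every subset. Same illustrative graph format as before.

<code python>
from itertools import combinations

def largest_independent_set(nodes, edges):
    """Enumerate all 2^n subsets, largest first: O(2^n) subsets overall."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(len(nodes), 0, -1):            # largest subsets first
        for subset in combinations(nodes, size):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                return subset
    return ()
</code>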
__Sublinear Time__
* While it only happens rarely, running times can be asymptotically smaller than linear.
* This usually occurs when the input can be "queried" indirectly rather than read completely (ex. binary search on a sorted array).
* The O(logn) run time only applies when it takes a constant amount of work to remove half the remaining input from the search each time (see the sketch below).
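A standard binary search sketch, matching the halving pattern described above: each probe does constant work and discards half of the remaining input.

<code python>
def binary_search(sorted_items, target):
    """Constant work per probe, half the range discarded each time: O(logn)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid                   # index of the target
        elif sorted_items[mid] < target:
            lo = mid + 1                 # discard the lower half
        else:
            hi = mid - 1                 # discard the upper half
    return -1                            # not present
</code>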
===== Chapter 2.5 (Priority Queues) =====
__The Problem__
* Priority queues are designed for applications in which elements have a priority value (or key) and we want to select an element which has the highest priority.
* Smaller keys represent higher priorities.
* Sorting = O(n logn) (a priority queue with O(logn) operations lets us sort n numbers by inserting them all and then repeatedly extracting the minimum; see the sketch below).
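A minimal sketch of priority-queue sorting using Python's standard heapq module (a binary heap, like the one described below); pq_sort is an illustrative name.

<code python>
import heapq

def pq_sort(items):
    """n Inserts at O(logn) each, then n ExtractMins at O(logn) each:
    O(n logn) total."""
    heap = []
    for x in items:
        heapq.heappush(heap, x)                      # Insert: O(logn)
    return [heapq.heappop(heap) for _ in range(len(items))]  # ExtractMin n times
</code>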
__Implementation (Heaps)__
* Heaps are like balanced binary trees with a root and each node having potentially one left and one right child.
* Keys (items) are in heap order if each key is equal to or larger than the key of its parent node in the tree (keys grow as you go down the tree, essentially).
* ExtractMin(H) = O(logn)
* Deletes the minimum key value in the heap, which uses both FindMin(H) and Delete(H,i), hence the O(logn) running time.
* By maintaining a separate position array we can also:
* Delete(H, Position[v]) = O(logn)
* ChangeKey(H, v, newkey) = O(logn)
* Heapify-Up
* Push a too-small key upwards in the tree by repeatedly swapping it with its parent node until heap order is restored (see the sketch below).
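A sketch of Heapify-Up on an array-backed heap, assuming the usual 0-indexed layout where the parent of index i sits at (i - 1) // 2; the operation name follows the text, the layout choice is mine.

<code python>
def heapify_up(heap, i):
    """Restore heap order after heap[i] became too small: O(logn),
    since each swap moves the key one level up the tree."""
    while i > 0:
        parent = (i - 1) // 2
        if heap[i] < heap[parent]:                   # heap order violated
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent                               # continue from the parent
        else:
            break                                    # heap order restored

# Insert(H, v) then amounts to appending v and pushing it up:
# heap.append(v); heapify_up(heap, len(heap) - 1)
</code>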
