==== Asymptotic Order of Growth ====
When discussing algorithm complexity, what matters most is how the number of instructions an algorithm executes grows relative to the size of its input.  This growth is represented by functions.  Big-O notation gives an asymptotic upper bound: f(n) is O(g(n)) if there exist constants c > 0 and n₀ ≥ 0 such that f(n) ≤ c·g(n) for all n ≥ n₀.  Omega (Ω) notation gives an asymptotic lower bound: f(n) is Ω(g(n)) if there exist constants ε > 0 and n₀ ≥ 0 such that f(n) ≥ ε·g(n) for all n ≥ n₀.  If f(n) is both O(g(n)) and Ω(g(n)), then we say that g(n) is an asymptotically tight bound for f(n), written f(n) = Θ(g(n)); up to constant factors, f grows exactly like g.  All one needs in order to understand asymptotic orders and compare them is a basic understanding of how different functions grow, and at which rates they grow.
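
For example, take f(n) = 3n² + 5n.  Then

$$ f(n) = 3n^2 + 5n \le 3n^2 + 5n^2 = 8n^2 \quad \text{for all } n \ge 1, $$

so f(n) is O(n²) with c = 8 and n₀ = 1.  Since f(n) ≥ 3n² for all n ≥ 0, f(n) is also Ω(n²), and therefore f(n) = Θ(n²).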

===== Chapter 4 =====
==== Section 7: Clustering ====
=== The Problem ===
== Maximum Spacing ==
=== Design of the Algorithm ===
=== Analyzing the Algorithm ===
==== Section 8: Huffman Codes and Data Compression ====
=== The Problem ===
== Variable-Length Encoding Schemes ==
== Prefix Codes ==
== Optimal Prefix Codes ==
=== Design ===
== Prefix Codes as Binary Trees ==
== Top-Down Approach ==
=== Analyzing ===

===== Chapter 5 =====
Divide and Conquer
==== Section 1: Mergesort ====
Idea: divide the list in half repeatedly until only trivially sorted lists (single entries) remain, then merge them back together pairwise so that the final list is a sorted version of the initial list.  The recursion has about log₂(n) levels, and the merging at each level takes O(n) time in total, so mergesort runs in O(n·log(n)) time overall.
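
A minimal Java sketch of this idea (my own illustration; the class and method names are made up, not from the course):

<code java>
import java.util.Arrays;

public class MergeSort {
    // Sort a copy of the input array using mergesort.
    public static int[] sort(int[] a) {
        if (a.length <= 1) return a;                              // a single entry is already sorted
        int mid = a.length / 2;
        int[] left  = sort(Arrays.copyOfRange(a, 0, mid));        // divide: sort the left half
        int[] right = sort(Arrays.copyOfRange(a, mid, a.length)); // divide: sort the right half
        return merge(left, right);                                // conquer: merge the sorted halves
    }

    // Merge two sorted arrays into one sorted array in O(n) time.
    private static int[] merge(int[] left, int[] right) {
        int[] out = new int[left.length + right.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length)
            out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        while (i < left.length)  out[k++] = left[i++];   // copy any leftovers
        while (j < right.length) out[k++] = right[j++];
        return out;
    }

    public static void main(String[] args) {
        int[] sorted = sort(new int[]{5, 2, 9, 1, 5, 6});
        System.out.println(Arrays.toString(sorted));     // prints [1, 2, 5, 5, 6, 9]
    }
}
</code>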
=== Approaches ===
There are two basic ways to solve the recurrence:
== Unrolling the Mergesort Recurrence ==
Analyze the first few levels of the recursion, identify the pattern, and generalize from there to sum the work across all levels.
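
For the mergesort recurrence T(n) ≤ 2T(n/2) + cn, unrolling works like this (the standard derivation, assuming n is a power of 2): level j of the recursion has 2^j subproblems, each of size n/2^j and each costing at most cn/2^j to merge, so every level contributes at most cn in total.  With log₂(n) levels,

$$ T(n) \;\le\; \sum_{j=0}^{\log_2 n - 1} 2^j \cdot c\,\frac{n}{2^j} \;=\; cn\log_2 n \;=\; O(n\log n). $$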
== Substituting a Solution into the Mergesort Recurrence ==
Guess what the solution is and then verify it by substituting it back into the recurrence (this works well for mergesort, since we know it takes O(n·log(n))).
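
Concretely, assume the guessed bound T(m) ≤ c·m·log₂(m) holds for m = n/2 and substitute it into the recurrence:

$$ T(n) \;\le\; 2\,c\,\frac{n}{2}\log_2\frac{n}{2} + cn \;=\; cn(\log_2 n - 1) + cn \;=\; cn\log_2 n, $$

so the guess holds for n as well, completing the induction.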
== Partial Substitution ==
If you don't know the constant, you can assume one exists, check that the substitution works out in the end, and then work backward to find the actual constant involved.
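
For instance, guess T(n) ≤ k·n·log₂(n) for some unknown constant k and substitute:

$$ T(n) \;\le\; 2\,k\,\frac{n}{2}\log_2\frac{n}{2} + cn \;=\; kn\log_2 n - kn + cn, $$

which is at most k·n·log₂(n) precisely when k ≥ c; so the assumed constant exists, and taking k = c works.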