Chapter 2: Basics of Algorithm Analysis

2.1 Computational Tractability

This book puts a lot of emphasis on explaining the authors' approaches to problems. I definitely think this is beneficial because it helps me see the overall goals and processes. This section starts with a basic definition of efficiency: an algorithm is efficient if it runs quickly on real input instances. This definition lacks specifics and gives no way to quantify how fast is fast enough. Our final definition of efficiency needs to be independent of the platform and of the size of the input set. This section also introduces the concept of worst-case running time as a measure of efficiency. Worst-case measurement is good because it is simpler to find and prove than average-case running time and more meaningful overall than a random test set. A possibly better definition of efficiency is that an algorithm is efficient if it is better than brute-force search, which looks at every possibility (every perfect matching, for example) to find a stable one. Finally, this section looks at polynomial time, which is the best way to classify an efficient algorithm. The authors make the point that not every polynomial algorithm is faster than every exponential algorithm in practice; it really depends on the situation, but polynomial time is a good abstraction of most situations.
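To convince myself why brute-force search over all n! perfect matchings is hopeless while an n^2 algorithm is fine, I tried a quick back-of-the-envelope comparison (my own sketch, not from the book):

<code python>
# Quick comparison of step counts for n^2, 2^n, and n! "algorithms",
# assuming each step is one operation.
import math

for n in [10, 30, 50]:
    print(f"n = {n:>2}:  n^2 = {n**2:>6}   2^n = {2**n:>16}   n! = {math.factorial(n):.2e}")
</code>

Even at n = 30 the exponential and factorial counts are already astronomically larger than n^2.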

2.2 Asymptotic Order of Growth

This section starts by discussing concepts we learned in CS112 about asymptotic order. However, the use of Θ and Ω is new to me. The O notation that I am used to is used for the asymptotic upper bound. Note that there can be several upper bounds, but the best one to use is the tightest one, because it gives a more accurate picture of the actual running time of an algorithm. The tight bound is expressed using Θ(n) and the lower bound is represented by Ω(n). All three of these notations are transitive and are preserved under sums of functions. The three most common function classes in asymptotic analysis are polynomial, logarithmic, and exponential. Logarithmic functions grow more slowly than polynomials, which grow more slowly than exponential functions.
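To keep the three bounds straight for myself, here are the formal definitions as I understand them, written out in LaTeX using the constants c, ε, and n_0 from the book:

<code latex>
T(n) = O(f(n))      \iff \exists\, c > 0,\ n_0 \ge 0 \ \text{such that}\ T(n) \le c \cdot f(n) \ \text{for all}\ n \ge n_0
T(n) = \Omega(f(n)) \iff \exists\, \epsilon > 0,\ n_0 \ge 0 \ \text{such that}\ T(n) \ge \epsilon \cdot f(n) \ \text{for all}\ n \ge n_0
T(n) = \Theta(f(n)) \iff T(n) = O(f(n)) \ \text{and}\ T(n) = \Omega(f(n))
</code>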

Because I missed this lecture in class, this section was very helpful in clearing up the differences between O, Θ and Ω and their respective uses, but for the most part, this section was a lot of review.

2.3 Implementing Stable Matching Problem Using Lists and Arrays

This section discusses the uses of lists and arrays in implementing the Stable Matching Problem discussed earlier. To accurately compare when to use a list vs. an array, we need to know the running times of basic array and list operations and their trade-offs. Some trade-offs to note are as follows (a small sketch follows the list):

- An array has constant access time for the ith element, but it is not good for dynamically maintaining a set of elements that changes.

- Lists are the opposite, with O(i) time to find the ith element, but they are good for dynamically changing sets.

- We can also convert between the array and list representations in O(n) time.
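Here is a small Python sketch (my own, not the book's) illustrating those trade-offs:

<code python>
# arr[i] is O(1); finding the ith element of a linked list means walking i links.

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def list_from_array(arr):
    """Convert an array to a singly linked list in O(n) time."""
    head = None
    for value in reversed(arr):
        node = Node(value)
        node.next = head
        head = node
    return head

def ith_element(head, i):
    """Walk the list to position i -- O(i) time, unlike arr[i], which is O(1)."""
    node = head
    for _ in range(i):
        node = node.next
    return node.value

arr = [4, 8, 15, 16, 23, 42]
head = list_from_array(arr)
print(arr[3], ith_element(head, 3))   # both print 16
</code>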

Since we want the algorithm to run in O(n^2) time, each iteration must take constant time. The preference lists are implemented with arrays. There are several other structures that track the engagements, the free men, and each man's next proposal; these all contain elements that change throughout the algorithm. Together, these data structures allow the G-S algorithm to run in O(n^2) time.
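To make sure I understand how these structures fit together, here is a Python sketch of how I would write it; the variable names mirror the book's Next, Current, and Ranking arrays and the list of free men, but the code itself is mine, not the authors'.

<code python>
def gale_shapley(men_prefs, women_prefs):
    """men_prefs[m] and women_prefs[w] are preference lists of indices 0..n-1."""
    n = len(men_prefs)
    # ranking[w][m] = position of man m on woman w's list, so comparing two
    # proposals is O(1) instead of scanning her preference list each time.
    ranking = [[0] * n for _ in range(n)]
    for w in range(n):
        for pos, m in enumerate(women_prefs[w]):
            ranking[w][m] = pos
    next_proposal = [0] * n      # index into men_prefs[m] of m's next proposal
    current = [None] * n         # current[w] = man w is engaged to, or None
    free_men = list(range(n))    # maintained as a stack; pop/append are O(1)

    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if current[w] is None:
            current[w] = m
        elif ranking[w][m] < ranking[w][current[w]]:
            free_men.append(current[w])   # w trades up; her old partner is free
            current[w] = m
        else:
            free_men.append(m)            # w rejects m; he stays free
    return {current[w]: w for w in range(n)}   # man -> woman

# Tiny example with n = 2: both men prefer woman 0; woman 0 prefers man 1.
print(gale_shapley([[0, 1], [0, 1]], [[1, 0], [0, 1]]))
# prints {1: 0, 0: 1}: man 1 gets woman 0, man 0 gets woman 1
</code>

Each iteration of the while loop does only constant-time work, and there are at most n^2 proposals, which is exactly the O(n^2) bound described above.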

I thought this explanation was a little more confusing than when we discussed it in class, mostly because there is no good way to write it down without being wordy and introducing lots of variables. I probably would have been really confused if I had read this section before class.

2.4 A Survey of Common Running Times

The goal of this section is to help us have a better understanding of common running times and what their typical algorithms might look like. The authors' approach is to start from the running time of the brute-force algorithm for a problem, with the goal of then improving on it.

The common running times, roughly in increasing order:

- Linear, O(n): the running time is at most c times the size of the input. Examples are computing the maximum of n numbers and merging two sorted lists (a small sketch of the merge appears after this list).

- O(n log n): the running time of many common algorithms, such as mergesort. These algorithms split the input in half, solve each half, and then put the solutions back together.

- Quadratic, O(n^2): typical of algorithms with two nested loops or ones that work on pairs of things (pairs of points, for example).

- Cubic, O(n^3): similar to quadratic, except with three nested loops or points in 3-D space.

- O(n^k): an example is finding independent sets of size k in a graph with n nodes, by enumerating all sets of k nodes and checking whether each one is independent.

- O(2^n): finding the independent set of maximum size takes this long; it generalizes to search problems where we must look at all subsets.

- O(n!): usually comes up when we are trying to find all ways to arrange n elements, where order matters (the Traveling Salesman Problem, for example).

- Sublinear: the best-known example is binary search, where the whole input does not need to be read in before the algorithm can start.
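Here is a quick Python sketch (mine, not the book's) of the linear-time merge of two sorted lists mentioned above:

<code python>
# Merge two already-sorted lists by repeatedly taking the smaller front element.
# Each element is handled once, so the work is O(n) in the total input size.

def merge_sorted(a, b):
    merged, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
    merged.extend(a[i:])   # at most one of these two still has elements left
    merged.extend(b[j:])
    return merged

print(merge_sorted([1, 4, 9], [2, 3, 10]))   # [1, 2, 3, 4, 9, 10]
</code>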

Starting on page 48, there are many algorithms in this section. Most of them are ones I have seen before, so while this was not a hard section, it is a good reminder of important algorithms and running times.

2.5 A More Complex Data Structure: Priority Queues

This section discusses how to implement a priority queue efficiently, using a heap. I learned this in CS112, but I like the different perspective of this book. It constantly reminds us of the usefulness of priority queues and how they can be used to efficiently solve problems.

Priority queues are incredibly useful for implementing an algorithm to schedule real-time events where each event has a ranking, or priority. If we use a list to implement a priority queue, we will have linear time for either add or delete, but our goal is to achieve O(log n). Our solution is to use a heap, which looks like a balanced binary tree in which the key of every parent is less than or equal to the keys of its children. There are pictures and examples on page 60. Representing a heap as an array where element i has its children at positions 2i and 2i+1 is fairly simple and provides O(1) time to find the minimum element, since it is at the root. The add and delete operations require us to Heapify-up or Heapify-down to fix the heap after we add or remove an element. These algorithms are on pages 61 and 63. They both fix the heap in O(log n) time, as desired.
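To review, here is a small Python sketch I wrote of the array-based heap described above; the heapify_up/heapify_down names mirror the book's Heapify-up and Heapify-down procedures, but the code itself is my own.

<code python>
# Array-based min-heap, 1-indexed so the children of position i sit at 2i and 2i+1.

class MinHeap:
    def __init__(self):
        self.items = [None]          # index 0 unused; the root lives at index 1

    def find_min(self):              # O(1): the smallest key is at the root
        return self.items[1]

    def insert(self, key):           # O(log n): add at the end, then bubble up
        self.items.append(key)
        self._heapify_up(len(self.items) - 1)

    def extract_min(self):           # O(log n): move last item to root, push it down
        smallest = self.items[1]
        last = self.items.pop()
        if len(self.items) > 1:
            self.items[1] = last
            self._heapify_down(1)
        return smallest

    def _heapify_up(self, i):
        while i > 1 and self.items[i] < self.items[i // 2]:
            self.items[i], self.items[i // 2] = self.items[i // 2], self.items[i]
            i = i // 2

    def _heapify_down(self, i):
        n = len(self.items) - 1
        while 2 * i <= n:
            child = 2 * i
            if child + 1 <= n and self.items[child + 1] < self.items[child]:
                child += 1           # pick the smaller of the two children
            if self.items[i] <= self.items[child]:
                break
            self.items[i], self.items[child] = self.items[child], self.items[i]
            i = child

heap = MinHeap()
for key in [5, 3, 8, 1]:
    heap.insert(key)
print(heap.extract_min(), heap.extract_min())   # 1 3
</code>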
