2.4: Runtimes
This section went into further detail about some common running times, including:
Linear time O(n)
An algorithm runs in linear time if its running time is at most a constant factor times the size of the input. Common problems with this running time include computing the maximum of n numbers and merging two sorted lists.
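The maximum example can be sketched in a few lines of Python (the function name `maximum` is mine, not the textbook's):

```python
def maximum(xs):
    """One pass over the input, constant work per element: O(n)."""
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best
```

Each element is examined exactly once, which is what makes the bound linear.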
O(n log n) Time
This is the running time of any algorithm that splits its input into two equal-sized pieces, solves each piece recursively, and then combines the two solutions in linear time.
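Mergesort is the classic instance of this pattern. A minimal, self-contained sketch (the function name is mine):

```python
def merge_sort(xs):
    """Split in half, sort each half recursively, merge in linear time: O(n log n)."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    # Linear-time merge of the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```

The recursion halves the input O(log n) times, and each level does O(n) merging work, giving the O(n log n) total.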
Quadratic Time
Example for this: you're given n points in a plane, and you want to find which pair is closest together. The brute-force approach measures the distance between every pair of points with two nested loops; since distance is symmetric, each unordered pair only needs to be measured once, which still leaves about n(n-1)/2 measurements, so the runtime is O(n^2).
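A brute-force sketch of this closest-pair search (the function name is mine; `itertools.combinations` handles the "each unordered pair once" bookkeeping):

```python
from itertools import combinations
import math

def closest_pair(points):
    """Check every unordered pair once: about n*(n-1)/2 distances, O(n^2)."""
    best, best_d = None, math.inf
    for (x1, y1), (x2, y2) in combinations(points, 2):
        d = math.hypot(x1 - x2, y1 - y2)
        if d < best_d:
            best_d, best = d, ((x1, y1), (x2, y2))
    return best
```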
Cubic Time
From what I can tell, this typically happens with triple-nested loops, each of which runs over O(n) elements. The iterations compound to make the runtime cubic.
O(n^k) Time
We obtain a runtime of O(n^k) for any constant k when we search over all subsets of size k, since there are on the order of n^k such subsets.
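A quick illustration of where the n^k count comes from (the helper name is mine; Python's `itertools.combinations` enumerates the subsets):

```python
from itertools import combinations

def k_subsets(items, k):
    """All subsets of size k: there are C(n, k) of them, which is O(n^k) for fixed k."""
    return list(combinations(items, k))
```

Any search that examines every one of these subsets, doing polynomial work per subset, inherits the O(n^k) factor.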
Bigger than Polynomial
Things that are of order O(n!), such as counting the ways to match up n items with n other items (there are n! such matchings), are even more costly than polynomial runtimes like O(n^2).
Smaller than Linear
The binary search algorithm is O(log n). This runtime arises when an algorithm does a constant amount of work in order to throw away a constant fraction of the input.
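Binary search in Python makes the "constant work, discard half" pattern concrete (a minimal sketch; the function name is mine):

```python
def binary_search(sorted_xs, target):
    """Each probe does constant work and discards half the remaining range: O(log n)."""
    lo, hi = 0, len(sorted_xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_xs[mid] == target:
            return mid
        elif sorted_xs[mid] < target:
            lo = mid + 1   # target must be in the right half
        else:
            hi = mid - 1   # target must be in the left half
    return -1  # not found
```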
This chapter was very straightforward. The only algorithms it detailed were to demonstrate runtimes of simple algorithms such as those mentioned above. I don't have any questions about it and I think learning this will just be very straightforward and memorization-focused.
2.5: Priority Queues
This section focused on priority queues. It started with a general introduction of the data structure, whose elements each have priority values (keys). This allows us to select elements in the queue by order of priority. It supports addition, deletion, and selection of the element with the smallest key in O(log n) time each.

The chapter then goes on to introduce the heap, a data structure for implementing a priority queue. It is a sort of balanced binary tree with a root and nodes that can each have up to two children. The keys are in "heap order" if the key of any element is at least as large as the key of the element at its parent node. The chapter then goes into heap operations, including heapify-up, which allows us to insert a new element into a heap of n elements in O(log n) time. It looks like…
Heapify-up(H, i):
  If i > 1 then
    let j = parent(i) = ⌊i/2⌋
    If key[H[i]] < key[H[j]] then
      swap H[i] and H[j]
      Heapify-up(H, j)
    Endif
  Endif
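The pseudocode translates almost line for line into Python. In this sketch the keys are stored directly in the list, and I keep the textbook's 1-based positions by leaving a dummy slot at index 0 (both conventions are mine):

```python
def heapify_up(h, i):
    """Swap h[i] upward until heap order holds; at most O(log n) swaps.
    1-based positions as in the pseudocode: h[0] is an unused placeholder."""
    if i > 1:
        j = i // 2  # parent(i) = floor(i/2)
        if h[i] < h[j]:
            h[i], h[j] = h[j], h[i]
            heapify_up(h, j)
```

To insert a key, append it at the end of the list and call heapify_up on its position.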
It also includes heapify-down, with which we can delete an element from a heap of n elements in O(log n) time. The algorithm for this function can be found on page 63 in the textbook, and seems to be relatively intuitive once you get the hang of heapify-up. Near the end of this section is a list of other operations that we can build using the heap data structure and its heapify-down and heapify-up algorithms.
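This is my own sketch of the heapify-down idea, not the textbook's exact page-63 code (same 1-based list convention as before, with h[0] unused):

```python
def heapify_down(h, i):
    """Push h[i] downward until heap order holds; at most O(log n) swaps.
    1-based positions: h[0] is an unused placeholder."""
    n = len(h) - 1               # number of elements in the heap
    left, right = 2 * i, 2 * i + 1
    if left > n:
        return                   # i is a leaf; nothing below to compare with
    # Pick the smaller child, then swap downward if it beats h[i].
    j = left if (right > n or h[left] <= h[right]) else right
    if h[j] < h[i]:
        h[i], h[j] = h[j], h[i]
        heapify_down(h, j)
```

Deleting the minimum then amounts to moving the last element to position 1 and calling heapify_down(h, 1).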
3.1: Graphs
This section obviously focuses on graphs. The first couple of pages simply list out common examples of graphs in our everyday life. They seem to be everywhere! Very promising and exciting! Everything discussed about graphs in this section was extremely basic, and relates directly back to the discrete math class I had last semester. All of the terms such as node, path, directionality, and connectivity are defined the same way, and all of them are very intuitive to me. For that reason, I don't see any definitions or algorithms in this chapter that are noteworthy. It is just a fun and fluffy intro to graphs.
