Chapter 2

2.1 Computational Tractability

  • Comparing algorithms requires some measure of efficiency. The measure must be abstract enough to be convenient for comparison, yet still apply to typical real-world cases. The best-known general-purpose measure is worst-case running time; average-case analysis has its uses but is rarely well defined. The natural baseline is brute-force search, and since brute force typically takes exponential time, efficiency can be defined as doing qualitatively better: polynomial running time. Large disparities in coefficients or small input sizes can occasionally make an exponential-time algorithm attractive or a polynomial-time one impractical, but polynomial time is a useful standard in most circumstances (see the small sketch after this list).
  • No insights.
  • Is there always an easy brute-force algorithm? I was recently working on an algorithm for laying out nodes in relational graphs, and do not see any brute-force solution (I wound up using a force-directed technique). Finding a measure of goodness is somewhat straightforward, but how does one come up with candidates? (Although I suppose that one could say that the search space is the fantastically large set of layouts unique at the display resolution. But what about truly continuous problems?).
  • I would have much preferred a direct approach; the only things I see as significant in this section are the analysis of brute-force bounds and the definition of polynomial time as the standard of efficiency. That said, I had already seen this material before, and I think the authors did a good job of leading readers toward the answer. 8/10.
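
  As a quick sanity check on the polynomial-versus-exponential claim above, here is a minimal Python sketch (my own illustrative numbers, not the book's) pitting a polynomial with a deliberately large coefficient against a brute-force-style exponential count:

    # Compare 100*n^2 steps against 2^n steps: the exponential is cheaper
    # only for very small n, then loses catastrophically.
    for n in (10, 20, 30, 40, 50):
        poly = 100 * n ** 2      # polynomial time with an ugly constant
        brute = 2 ** n           # typical brute-force search-space size
        winner = "polynomial" if poly < brute else "brute force"
        print(f"n={n:2d}  100n^2={poly:<9d} 2^n={brute:<16d} cheaper: {winner}")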

2.2 Asymptotic Order of Growth

  • Counting individual steps exactly is both too fine-grained (the resulting equivalence classes of algorithms would each be sparsely populated) and misleading, since it makes efficiency analyses overly sensitive to implementation details. We therefore ignore constant coefficients and lower-order terms, giving asymptotic bounds (upper, lower, and tight). These bounds are transitive; in a sum only the term of greater order survives, and multiplication by a constant leaves the order unchanged (a worked instance follows this list).
  • No insights.
  • No questions.
  • Well done; I think that the level of proof/explication was well chosen. 9/10.
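
  A worked instance of the absorption rules above (my own example function), written in LaTeX:

    \[
      f(n) = 4n^2 + 3n + 10 \;\le\; 4n^2 + 3n^2 + 10n^2 \;=\; 17n^2 \qquad \text{for all } n \ge 1,
    \]
    so $f(n) = O(n^2)$ with witnesses $c = 17$ and $n_0 = 1$; since also $f(n) \ge 4n^2$, we have $f(n) = \Omega(n^2)$ and hence $f(n) = \Theta(n^2)$. The lower-order term and the constant coefficients disappear, exactly as the sum and constant-multiplication rules predict.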

2.3 Implementing the Stable Matching Algorithm

  • The actual running time of an algorithm depends on the data structures used to implement it; the analytical bound is only achieved if every step runs in constant time (and meeting this goal may require preprocessing, converting the input into structures better suited to the operations needed). Arrays offer constant-time lookup and overwriting of arbitrary elements, but costly insertion and deletion; linked lists have poor lookup (except at the head), but efficient insertion and deletion. A sketch of the resulting implementation follows this list.
  • No insights.
  • I am still somewhat bothered by the suggestion of O(1) alteration of lists; it should be O(i), like the lookup, even for a mutable list. That said, for Stable Matching we are only using lists as queues.
  • I like the combination of the introduction of the data structures with putting them to use, but doing all the description of the structures first somewhat ruins the effect. Well-written anyway. 8/10.
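
  To make the preprocessing point concrete, here is a minimal Python sketch of Gale-Shapley with the array-based bookkeeping this section describes (names such as rank, next_choice, and free_men are my own, not the book's):

    def stable_matching(men_prefs, women_prefs):
        """Gale-Shapley; men_prefs[m] and women_prefs[w] are preference lists of indices 0..n-1."""
        n = len(men_prefs)
        # Preprocessing: rank[w][m] = position of m on w's list, so the test
        # "does w prefer m to her current partner?" is an O(1) array lookup.
        rank = [[0] * n for _ in range(n)]
        for w in range(n):
            for pos, m in enumerate(women_prefs[w]):
                rank[w][m] = pos
        next_choice = [0] * n      # index of the next woman m will propose to
        husband = [None] * n       # husband[w], or None if w is unmatched
        free_men = list(range(n))  # the free men, used as a simple stack
        while free_men:
            m = free_men.pop()
            w = men_prefs[m][next_choice[m]]
            next_choice[m] += 1
            current = husband[w]
            if current is None:
                husband[w] = m                   # w was free: engage (m, w)
            elif rank[w][m] < rank[w][current]:
                husband[w] = m                   # w trades up; her old partner is free again
                free_men.append(current)
            else:
                free_men.append(m)               # w rejects m; he stays free
        return [(husband[w], w) for w in range(n)]

  Each man proposes at most n times, and every step inside the loop is a constant-time array or list operation, which is what delivers the O(n^2) bound.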

2.4 A Survey of Common Running Times

  • Knowing the common running times is important, both for knowing how far one can expect to pare an algorithm down and for knowing the cost of the individual operations used inside a larger algorithm. Any algorithm that must process every part of its input is at least linear; anything that can be done in a single pass of constant-time steps is tightly linear. Divide-and-conquer algorithms often come out to O(n log n), which is why that bound is common for sorting. Polynomial times with higher exponents typically arise from examining all pairs, triples, or more generally all subsets of some fixed size k drawn from n elements. Exponential time typically arises from brute-force algorithms trying all possibilities (all 2^n subsets, say), and sublinear time arises only when prior knowledge (such as an array being sorted) lets one examine just part of the input. Two small examples of the endpoints follow this list.
  • No insights.
  • No questions.
  • Fairly clear, although going in order of increasing running time and then concluding with sublinear was rather jarring (knowing that there are sublinear algorithms, I spent most of the section wondering when I would see them). 7/10.
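
  Two small Python illustrations of the endpoints mentioned above: a tightly linear single pass, and a sublinear search that relies on prior knowledge that the input is sorted (my own examples):

    def max_of(values):
        # O(n): constant work per element, and every element must be examined.
        best = values[0]
        for v in values[1:]:
            if v > best:
                best = v
        return best

    def binary_search(sorted_values, target):
        # O(log n): halves the candidate range on every probe.
        lo, hi = 0, len(sorted_values) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_values[mid] == target:
                return mid
            if sorted_values[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1   # target not present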

2.5 Priority Queues

  • A common problem is maintaining a set of elements from which one repeatedly retrieves the minimum (or maximum) key, without needing general random access. A heap is a structure of nodes, each branching to at most two children; it is in heap order if every node's key is at least as large as its parent's. Heaps are most intuitively implemented as trees, but can also be stored in arrays if the size is bounded. Since a balanced tree has depth O(log n), the main heap operations run in O(log n) (see the sketch after this list).
  • I am still bothered by the approach to deleting the root by sending the larger child to the bottom, when it is quite likely that it will need to be moved back near the top, although I cannot improve on it.
  • What is the running time of an operation that recursively operates on each node of a tree (and then merges results in constant time)? It seems that it should be O(n), since there are n nodes and each is visited once, but it also bears some similarity to the O(n log n) recursive sorts (such as mergesort).
  • Well-written. 8/10.
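
  A minimal array-backed min-heap sketch in Python (my own naming; the two while loops correspond to the book's Heapify-up and Heapify-down), showing why insert and extract-min each cost O(log n), one root-to-leaf path of swaps:

    class MinHeap:
        # Children of index i live at 2*i + 1 and 2*i + 2, so the array encodes
        # a balanced binary tree of depth O(log n) without explicit pointers.
        def __init__(self):
            self.a = []

        def insert(self, key):
            self.a.append(key)
            i = len(self.a) - 1
            while i > 0 and self.a[(i - 1) // 2] > self.a[i]:   # heapify-up
                parent = (i - 1) // 2
                self.a[i], self.a[parent] = self.a[parent], self.a[i]
                i = parent

        def extract_min(self):
            # Assumes the heap is non-empty: move the last element to the root,
            # then let it sink back down.
            root, last = self.a[0], self.a.pop()
            if self.a:
                self.a[0] = last
                i = 0
                while True:                                     # heapify-down
                    smallest = i
                    for child in (2 * i + 1, 2 * i + 2):
                        if child < len(self.a) and self.a[child] < self.a[smallest]:
                            smallest = child
                    if smallest == i:
                        break
                    self.a[i], self.a[smallest] = self.a[smallest], self.a[i]
                    i = smallest
            return root

    h = MinHeap()
    for key in (5, 1, 4, 2):
        h.insert(key)
    print(h.extract_min())   # prints 1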