2.1 Computational Tractability
Efficiency of algorithms is defined in terms of running time, though an efficient algorithm must also be economical in its use of other resources, especially the amount of space (or memory) it uses. To analyze an algorithm, we focus on its worst-case running time: we look for a bound on the largest possible running time the algorithm could have over all inputs of a given size N, and see how this scales with N. Worst-case analysis is generally the most useful measure when analyzing algorithms.

An efficient algorithm should be platform-independent and instance-independent, and should have predictive value with respect to increasing input sizes. Such an algorithm must achieve qualitatively better worst-case performance, at an analytical level, than brute-force search. Suppose that for a given algorithm there are absolute constants c > 0 and d > 0 such that on every input of size N, its running time is bounded by cN^d primitive computational steps; this means its running time is at most proportional to N^d. If such a bound holds for some c and d, then the algorithm has a polynomial running time and is called a polynomial-time algorithm. We define an efficient algorithm as one with a polynomial running time.

This concrete mathematical definition of an efficient algorithm has turned out to correspond in practice to what we observe about the efficiency of algorithms, and the tractability of problems, in real life. One further reason the mathematical definition and the empirical evidence line up so well in the case of polynomial-time solvability is that the gulf between the growth rates of polynomial and exponential functions is enormous. Furthermore, the mathematical definition makes efficiency negatable: it allows us to express the claim that there is no efficient algorithm for a particular problem.
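To see how enormous that gulf between polynomial and exponential growth really is, here is a minimal sketch (in Python, chosen only for illustration; the function names are my own) comparing a polynomial step bound cN^d with the 2^N steps a brute-force search over all subsets of N items would take:

```python
def poly_steps(n, c=1, d=3):
    """Steps allowed by a polynomial running-time bound c * N^d."""
    return c * n ** d

def brute_force_steps(n):
    """Steps taken by a brute-force search over all 2^N subsets of N items."""
    return 2 ** n

# For small N the two are comparable; by N = 70 the exponential count
# dwarfs the polynomial one by many orders of magnitude.
for n in [10, 30, 50, 70]:
    print(f"N={n:3d}  N^3={poly_steps(n):>12,}  2^N={brute_force_steps(n):>25,}")
```

At N = 70, N^3 is only 343,000 steps, while 2^N exceeds 10^21 steps, more than any real machine could perform in a human lifetime. This is why the polynomial/exponential divide, and not the constants c and d, is what the definition of efficiency hinges on.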
As its title suggests, this section explores the computational tractability of problems. It aims to show why the mathematical definition of algorithm efficiency helps us predict an algorithm's behavior without worrying about the platform on which we run it. Thus an efficient algorithm has a polynomial running time and keeps that behavior no matter what platform we use.
The question I have is: “Are running times of algorithms completely platform-independent, or did I get it wrong? What I mean is, if I run an algorithm on my PC, is it gonna run as quickly as if I ran it on a supercomputer? I think algorithms run faster on supercomputers due to the power of the processors and other design techniques of those supercomputers. Or is the running time all about the mathematical definition, with no regard for the actual platform it is run on?” My best guess is that supercomputers just increase the speed at which the algorithms are processed, which is why I can't quite clearly see the difference between the mathematical definition of the running time and the actual time an algorithm takes on a particular machine.