Efficiency of algorithms is defined in terms of running time, but algorithms must also be efficient in their use of other resources, especially the amount of space (or memory) they use. To analyze algorithms, we focus on the worst-case running time, where we look for a bound on the largest possible running time the algorithm could have over all inputs of a given size N, and see how it scales with N. Worst-case analysis is generally the most useful way to analyze algorithms. An efficient algorithm is one that is platform-independent, instance-independent, and has predictive value with respect to increasing input sizes. Such an algorithm must achieve qualitatively better worst-case performance, at an analytical level, than brute-force search. Let us suppose that for a given algorithm there are absolute constants c > 0 and d > 0 such that on every input of size N, its running time is bounded by cN^d primitive computational steps. This means that its running time is at most proportional to N^d. If this running-time bound holds for some c and d, then the algorithm has a polynomial running time and is called a polynomial-time algorithm. We define an efficient algorithm as one that has a polynomial running time. This concrete mathematical definition of efficiency has turned out to correspond in practice to what we observe about the efficiency of algorithms, and the tractability of problems, in real life. One further reason why the mathematical definition and the empirical evidence line up well in the case of polynomial-time solvability is that the gulf between the growth rates of polynomial and exponential functions is enormous. Furthermore, the mathematical definition of an efficient algorithm makes it negatable: it allows us to express the notion that there is no efficient algorithm for a particular problem.
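To convince myself of that enormous gulf, I wrote a small Python sketch (my own, not from the book); the speed of one billion primitive steps per second is just an assumption I picked to make the numbers concrete.

```python
STEPS_PER_SECOND = 1_000_000_000  # assumed machine speed (my assumption, not from the text)

def seconds_needed(steps: float) -> float:
    """Convert a raw step count into wall-clock seconds on the assumed machine."""
    return steps / STEPS_PER_SECOND

for n in (10, 30, 50, 70):
    poly = n ** 3   # a polynomial bound, c*N^d with c = 1, d = 3
    expo = 2 ** n   # an exponential bound, for comparison
    print(f"N = {n:3d}:  N^3 -> {seconds_needed(poly):.2e} s,"
          f"  2^N -> {seconds_needed(expo):.2e} s")
```

Even at N = 70, the cubic bound finishes in a fraction of a millisecond, while the exponential bound would need tens of thousands of years on the same assumed machine.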
As the title reveals, this section aims at exploring the computational intractability of problems. It seeks to show why the mathematical definition of algorithm efficiency helps us predict the behavior of an algorithm without worrying about the platform we run it on. Thus an efficient algorithm has a polynomial running time and keeps that behavior no matter what platform we are using.
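Here is another quick Python sketch I put together (again my own, not the authors') to see what "platform-independent" means in practice: a machine that is 1000 times faster just multiplies the step budget by 1000, so I can check how much larger an input each kind of algorithm handles in the same amount of time. The 10^9-step budget is an assumption of mine.

```python
import math

def max_n_quadratic(steps_budget: int) -> int:
    """Largest N with N^2 <= steps_budget (a polynomial-time algorithm)."""
    return math.isqrt(steps_budget)

def max_n_exponential(steps_budget: int) -> int:
    """Largest N with 2^N <= steps_budget (an exponential-time algorithm)."""
    return int(math.log2(steps_budget))

budget = 10 ** 9          # steps my PC can afford in some fixed time (assumed)
faster = 1000 * budget    # the same amount of time on a machine 1000x faster

print("quadratic:  ", max_n_quadratic(budget), "->", max_n_quadratic(faster))
print("exponential:", max_n_exponential(budget), "->", max_n_exponential(faster))
```

The quadratic algorithm handles an input roughly 31 times larger on the faster machine, while the exponential one only gains about 10 more input items: the faster platform changes the constants, not which algorithms count as efficient.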
The question I have is: "Are running times of algorithms completely platform-independent, or did I get that wrong? What I mean is, if I run an algorithm on my PC, is it going to run as quickly as if I ran it on a supercomputer? I think algorithms run faster on supercomputers due to the power of their processors and the other design techniques of those machines. Or is the running time all about the mathematical definition, with no regard for the actual platform it is run on?" My best guess is that supercomputers just increase the speed at which the algorithms are processed, which is why I can't quite clearly see the difference between the mathematical definition of the running time and what actually happens in the case of supercomputers. Or is the running time just how long it will take, not how fast? Fast algorithms are always fast, but what about those algorithms that are not that fast? I think supercomputers just make those algorithms run faster thanks to their processors and the way they are designed. It's too bad the authors of the book didn't discuss this simple confusion that may arise, though.
After rereading this section, I got the real meaning of computational intractability and realized the main reason why we have to describe the behavior of algorithms in mathematical notation. Among other reasons, it's because it makes it easy to predict the behavior of algorithms and gives us a precise reference point against which to compare an algorithm, so we can see whether it's as efficient as we could possibly make it.
I definitely want to remember the reason why we have to express the running time of algorithms in mathematical notation! As for the reading, it gave a good introduction to the chapter and showed the reader what to expect from it. I give it an 8/10.