Asked By : Quora Feans
Answered By : Pavel Zaichenkov
- the size of input problem is 10 million numbers: $n=10^7$;
- computer A executes $10^{10}$ instructions per second (~ 10GHz);
- computer B executes only $10^7$ instructions per second (~ 10MHz);
- the constant factors are $c_1=2$ (which is slightly overestimated) and $c_2=50$ (which in reality is smaller).
With these assumptions, sorting $10^7$ numbers takes

$$\frac{2 \cdot (10^7)^2 \text{ instructions}}{10^{10} \text{ instructions}/\text{second}} = 2 \cdot 10^4 \text{ seconds}$$

on computer A and

$$\frac{50 \cdot 10^7 \lg 10^7 \text{ instructions}}{10^{7} \text{ instructions}/\text{second}} \approx 1163 \text{ seconds}$$

on computer B. So the computer that is 1000 times slower solves the problem about 17 times faster. In reality the advantage of merge sort is even more significant, and it grows with the size of the problem. I hope this example helps to answer your question.

However, this is not all there is to algorithmic complexity. Today it is almost impossible to get a significant speedup just by using a machine with a higher CPU frequency. People need to design algorithms for multi-core systems that scale well, and this is also a tricky task, because as the number of cores grows, the overhead (for managing memory accesses, for instance) grows as well, so it is nearly impossible to get a linear speedup. To sum up, designing efficient algorithms today is as important as ever, because neither a frequency increase nor extra cores will give you a speedup comparable to the one brought by an efficient algorithm.
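The arithmetic above can be checked with a few lines of Python; the variable names (`speed_a`, `c1`, etc.) are my own, chosen to match the parameters listed earlier:

```python
import math

# Parameters from the example above: a quadratic-time sort on the fast
# computer A versus merge sort on the slow computer B.
n = 10**7            # input size: 10 million numbers
speed_a = 10**10     # instructions per second, computer A (~10 GHz)
speed_b = 10**7      # instructions per second, computer B (~10 MHz)
c1, c2 = 2, 50       # constant factors of the two algorithms

time_a = c1 * n**2 / speed_a               # seconds on computer A
time_b = c2 * n * math.log2(n) / speed_b   # seconds on computer B

print(f"A: {time_a:.0f} s, B: {time_b:.0f} s, "
      f"ratio: {time_a / time_b:.1f}x")
# prints "A: 20000 s, B: 1163 s, ratio: 17.2x"
```

Plugging in a larger $n$ shows the gap widening, which is the point of the answer: the asymptotic term dominates the hardware difference.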
Best Answer from StackExchange
Question Source : http://cs.stackexchange.com/questions/15017