NC is the class of problems that can be solved by a parallel algorithm in time $O(\log^c n)$ on $p(n) \in O(n^k)$ processors, with $c, k \in \mathbb{N}$.
We can assume a PRAM. My problem is that this does not seem to say much about "real" machines, that is, machines with a finite number of processors. Now I am told that "it is known" that we can "efficiently" simulate an $O(n^k)$-processor algorithm on $p \in \mathbb{N}$ processors.

What does "efficiently" mean here? Is this folklore, or is there a rigorous theorem which quantifies the overhead caused by the simulation?

What I am afraid happens is this: I have a problem with a sequential $O(n^k)$ algorithm and also an "efficient" parallel algorithm which, when simulated on $p$ processors, also takes $O(n^k)$ time (which is all that can be expected at this granularity of analysis if the sequential algorithm is asymptotically optimal). In this case there is no speedup whatsoever as far as we can see; in fact, the simulated parallel algorithm may be slower than the sequential algorithm. That is, I am really looking for statements more precise than $O$-bounds (or a declaration of the absence of such results).
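For concreteness, the standard simulation argument (Brent's scheduling principle) can be made quantitative: if a parallel step has $m$ active virtual processors, $p$ real processors can execute it round-robin in $\lceil m/p \rceil$ time, giving total time at most $W/p + D$ for work $W$ and depth $D$. A minimal sketch counting this bound (the function and example are mine, not from the question):

```python
from math import ceil

def simulated_time(ops_per_step, p):
    """Time for p real processors to simulate a PRAM schedule.

    ops_per_step[t] is the number of virtual processors active in
    parallel step t. Each real processor takes a round-robin share of
    the step's operations, so step t costs ceil(ops_per_step[t] / p).
    """
    return sum(ceil(ops / p) for ops in ops_per_step)

# Example: summing n = 1024 values by pairwise reduction.
# Step t has n / 2^(t+1) active virtual processors; depth D = log2(n).
n = 1024
schedule = [n >> (t + 1) for t in range(10)]  # [512, 256, ..., 1]
W = sum(schedule)   # total work: 1023 additions
D = len(schedule)   # depth: 10 parallel steps

print(simulated_time(schedule, p=1))  # 1023 = W (plain sequential)
print(simulated_time(schedule, p=8))  # 130, within W/8 + D
print(simulated_time(schedule, p=n))  # 10 = D (fully parallel)
```

This illustrates the worry in the question: for constant $p$ the bound $W/p + D$ is dominated by $W/p \in O(n^k)$, so asymptotically the simulated algorithm is no faster than a work-optimal sequential one; the speedup is only the constant factor $p$.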
Asked By : Raphael
Answered By : Tsuyoshi Ito
Best Answer from StackExchange
Question Source : http://cs.stackexchange.com/questions/1647