[Solved]: How can repeated addition/multiplication be done in polynomial time?

Problem Detail: I can see how adding two unsigned $n$-bit values is $O(n)$. We just go from the rightmost digits to the leftmost digits and add the digits up sequentially. We can also perform multiplication in polynomial time ($O(n^2)$) via the algorithm we all learned in grade school. However, how can we add up or multiply, say, $i$ numbers together in polynomial time? After we add two numbers together, we get a bigger number that requires more bits to represent. The same goes for multiplication. How can we ensure that these extra bits do not produce exponential blowup?

Asked By : David Faux

Answered By : beauby

From what you said, I suppose that you consider complexity in terms of the number of bits of the input. Say you add up two numbers $a$ and $b$ with $n_1$ and $n_2$ bits respectively; then the result has at most $\max(n_1, n_2) + 1$ bits, since $a + b \leq 2 \times \max(a, b)$. For multiplication, the result has at most $2 \times \max(n_1, n_2)$ bits, since $a \times b \leq \max(a, b)^2$.

The important thing is that the number of bits necessary to represent the sum / product of two numbers is polynomial in the number of bits needed to represent those numbers. Hence, for $i$ numbers, the number of bits necessary to represent the answer is still polynomial in the number of bits of the input (since the composition of polynomial functions yields polynomial functions).

Edit: actually, the really important thing is that $i$ is not part of the input; it is fixed. Otherwise, take for instance $2 \times 2 \times \ldots \times 2 = 2^i$: the answer takes $i + 1$ bits, while a compact encoding of the input (the factor $2$ together with the count $i$ written in binary) takes only about $\log i$ bits, so the output is exponentially larger than the input. But if $i$ is fixed, then $i + 1$ is a constant. This kind of distinction happens quite often in computer science, especially in the field of fixed-parameter tractability.
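A minimal Python check of these size bounds (not part of the original answer; Python is chosen only because its integers are arbitrary-precision, so `bit_length()` reports exact sizes). It also contrasts a plain product of $i$ listed factors, whose size grows linearly, with repeated squaring, where the bit length genuinely doubles at every step:

```python
# Bounds from the answer:
#   bits(a + b) <= max(bits(a), bits(b)) + 1
#   bits(a * b) <= bits(a) + bits(b)
a, b = 1234567, 89
assert (a + b).bit_length() <= max(a.bit_length(), b.bit_length()) + 1
assert (a * b).bit_length() <= a.bit_length() + b.bit_length()

# Product of i copies of 2: the result 2**i needs i + 1 bits,
# i.e. linear in i -- harmless when the i factors are written out.
i = 20
product = 1
for _ in range(i):
    product *= 2
assert product == 2 ** i
assert product.bit_length() == i + 1

# Repeated squaring, by contrast, doubles the bit length each step:
# after k squarings of 2 the value is 2**(2**k), needing 2**k + 1 bits,
# which is exponential in the number of operations performed.
x = 2
for k in range(1, 6):
    x *= x
    assert x.bit_length() == 2 ** k + 1
```

This illustrates the answer's point: the danger is not in combining a fixed list of inputs, but in letting the operation count scale while intermediate results feed back into themselves.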
Best Answer from StackExchange

Question Source : http://cs.stackexchange.com/questions/6843