Problem Detail: Consider a binary counter supporting the operations INCREMENT and DECREMENT. I need to show that this kind of counter cannot be implemented with constant amortized time per operation. That is, I need to exhibit a sequence of $N$ operations whose total running time is $\omega(N)$. My Try:
Suppose we first perform $2^k-1$ INCREMENT operations, so the counter is a sequence of $k$ 1's. Now consider a sequence of operations alternating between INCREMENT and DECREMENT (INC, DEC, INC, DEC, …), so that the counter repeatedly crosses the boundary between $2^k-1$ and $2^k$. Each such operation must flip $\Theta(k)$ bits. Somehow I need to conclude from this that the total running time is $\omega(N)$. How?
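A quick sketch (my own illustration, not part of the original question) that checks the key observation: once the counter sits at $2^k-1$, every operation in the alternating INC, DEC, INC, DEC, … sequence crosses the $0111\ldots1 \leftrightarrow 1000\ldots0$ boundary and flips exactly $k+1$ bits.

```python
def flips(a, b):
    """Number of bit positions in which the values a and b differ."""
    return bin(a ^ b).count("1")

k = 5
v = 2**k - 1             # counter after 2^k - 1 increments: k ones (0111...1)

total = 0
for i in range(10):      # alternate INC, DEC, INC, DEC, ...
    nxt = v + 1 if i % 2 == 0 else v - 1
    total += flips(v, nxt)
    v = nxt

# Every operation crosses the 0111...1 <-> 1000...0 boundary,
# so each of the 10 operations flips k+1 bits.
print(total)             # → 60, i.e. 10 * (k + 1)
```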
Asked By : AlonAlon
Answered By : Yuval Filmus
Your solution is on track. As you observe, if you increment a counter $2^k-1$ times and then perform $m$ alternating increment/decrement operations, in total you must have modified at least $km$ array positions (this will serve as our lower bound on the running time). In your case $N = 2^k-1+m$. If you choose, for example, $m = 2^k+1$, then $N = 2^{k+1}$ while the lower bound is $km \geq (\log_2 N - 1) \cdot (N/2) = \Omega(N\log N)$. This shows that the amortized running time is $\Omega(\log N)$, which is certainly superconstant.

We can easily get a matching (non-amortized) upper bound. If we only perform $N$ operations, then the value of the counter lies in the range $[-N,N]$, and so takes $O(\log N)$ bits to represent. It is not difficult to implement increment and decrement so that they take time linear in the length of the representation, which is $O(\log N)$ in this case.
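The upper bound can be sketched as follows (my own illustration; the answer does not prescribe a particular representation). Here the counter is stored as a sign plus a little-endian bit list, so both operations are a single ripple-carry pass over the magnitude, i.e. linear in the number of bits and hence $O(\log N)$ after $N$ operations.

```python
class Counter:
    """Signed binary counter: sign plus magnitude bits, least significant first."""

    def __init__(self):
        self.sign = 1        # +1 or -1 (sign of the stored value)
        self.bits = [0]      # magnitude in binary, little-endian

    def _inc_mag(self):
        # Add 1 to the magnitude: flip trailing 1s to 0, then set the first 0.
        i = 0
        while i < len(self.bits) and self.bits[i] == 1:
            self.bits[i] = 0
            i += 1
        if i == len(self.bits):
            self.bits.append(1)
        else:
            self.bits[i] = 1

    def _dec_mag(self):
        # Subtract 1 from a nonzero magnitude: flip trailing 0s to 1,
        # then clear the first 1.
        i = 0
        while self.bits[i] == 0:
            self.bits[i] = 1
            i += 1
        self.bits[i] = 0

    def _is_zero(self):
        return all(b == 0 for b in self.bits)

    def increment(self):
        if self.sign == 1 or self._is_zero():
            self.sign = 1
            self._inc_mag()      # moving away from zero: grow the magnitude
        else:
            self._dec_mag()      # negative value: moving toward zero

    def decrement(self):
        if self.sign == -1 or self._is_zero():
            self.sign = -1
            self._inc_mag()
        else:
            self._dec_mag()

    def value(self):
        return self.sign * sum(b << i for i, b in enumerate(self.bits))
```

For example, five increments followed by seven decrements leave the counter at $-2$, and every operation touches at most one run of equal bits plus one extra position.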
Best Answer from StackOverflow
Question Source : http://cs.stackexchange.com/questions/33350