- The naive algorithm runs in $O(n)$ if we ignore the cost of each addition (a sketch follows this list).
- Binet's formula and the matrix exponentiation method should both theoretically run in $O(\lg n)$, since exponentiation of both matrices and real numbers takes $O(\lg n)$ multiplication steps using exponentiation by squaring (a sketch of the matrix version also follows this list).
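A minimal sketch of the naive iterative algorithm in Python (the language the question mentions); the name `fib_naive` is introduced here for illustration:

```python
def fib_naive(n):
    """Iterative Fibonacci: n additions, hence O(n) if each addition is O(1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```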
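And a sketch of the matrix exponentiation method, using exponentiation by squaring so that only $O(\lg n)$ multiplications of $2 \times 2$ matrices are performed; `fib_matrix` and `mat_mul` are illustrative names:

```python
def fib_matrix(n):
    """Fibonacci via [[1,1],[1,0]]^n: O(lg n) 2x2 matrix multiplications."""
    def mat_mul(x, y):
        # Standard 2x2 matrix product.
        return (
            (x[0][0] * y[0][0] + x[0][1] * y[1][0],
             x[0][0] * y[0][1] + x[0][1] * y[1][1]),
            (x[1][0] * y[0][0] + x[1][1] * y[1][0],
             x[1][0] * y[0][1] + x[1][1] * y[1][1]),
        )
    result = ((1, 0), (0, 1))   # identity matrix
    base = ((1, 1), (1, 0))
    while n:
        if n & 1:               # square-and-multiply
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]         # [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
```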
The problem arises when you analyze the size of the $n$th Fibonacci number. Assume that after the first few members of the sequence, the ratio between consecutive numbers is at least $1.5$ (a value picked for convenience; I assume it can easily be proved by induction). We can then bound the $n$th Fibonacci number from below by $c_1 \cdot 1.5^n$, and its logarithm gives the number of digits it has: $c_2 \cdot n$.

Am I right to assume that you cannot print out (or compute) a linear number of digits in sublinear time? My explanation of this "paradox" is that I forgot to include the multiplication costs in the second algorithm (Binet/matrix), which makes its complexity $O(n \lg n)$. A rough check of the digit count is sketched below.

I have found that the naive algorithm (the first one) runs better for very small inputs, and the second algorithm runs better for bigger ones (python3). Is my explanation of the complexity correct, and should the naive algorithm regain the better running time at even larger inputs ($n > 10^9$ or so)?

I do not consider this to be a duplicate question, since it addresses the problem of arbitrary values and arbitrary-precision integer arithmetic.
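A minimal check of the digit-count claim, assuming the `fib_naive` sketch from above; the ratio of digit count to $n$ settles near $\log_{10} \varphi \approx 0.209$, i.e. the digit count grows linearly in $n$:

```python
# Assumes fib_naive from the sketch above.
for n in (1_000, 10_000, 100_000):
    digits = len(str(fib_naive(n)))
    print(n, digits, digits / n)   # ratio approaches log10(phi) ~ 0.209
```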
Asked By : Petar Mihalj
Answered By : sashas
Best Answer from CS Stack Exchange
Question Source : http://cs.stackexchange.com/questions/62401