Asked By : vz0
Answered By : jmad
The question asks about an algorithm that in a non-lazy functional programming language has $\Omega(n \log n)$ complexity, while the same algorithm in imperative programming is $\Omega(n)$. Adding laziness to the FP language would make the algorithm $\Omega(n)$.
I would like to see this reference. The usual assumption is that access to an array of length $n$ in a RAM takes $O(1)$ time, and the equivalent in pure FP takes $O(\log n)$ time. That is not entirely true: access time in a RAM is in $O(\log m)$, where $m$ is the size of the memory. Of course, $m \ge n$. In practice, accessing an element of an array is much faster. One could argue that $m$ is bounded, but… so is $n$!

EDIT: Thank you for the link (the link for the paper about laziness is not available; here is another one). As noted in the comments and above in my answer, the RAM model is a little unfair to pure FP by providing $O(1)$-time look-ups even when the size of one address is not bounded. I have yet to understand how the lazy trick works, but I think it is important to note that it only applies to this particular problem.
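To make the $O(\log n)$ claim concrete: in a purely functional setting an "array" is typically represented as a balanced tree, so both lookup and (persistent) update walk a root-to-leaf path of length $O(\log n)$, versus $O(1)$ for a mutable RAM array. Below is a minimal sketch of that idea using nested tuples; the names `from_list`, `index`, and `update` are illustrative choices, not anything from the question, and the sketch assumes the length is a power of two for simplicity.

```python
# A purely functional "array" as a complete binary tree of nested tuples.
# Lookup and persistent update cost O(log n) steps, versus O(1) for a
# mutable RAM array -- the gap discussed above.

def from_list(xs):
    """Build a balanced tree; assumes len(xs) is a power of two."""
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return (from_list(xs[:mid]), from_list(xs[mid:]))

def index(n, i, t):
    """Read element i of a size-n tree in O(log n) steps."""
    if n == 1:
        return t
    half = n // 2
    return index(half, i, t[0]) if i < half else index(half, i - half, t[1])

def update(n, i, x, t):
    """Return a new tree with element i replaced; untouched subtrees
    are shared with the old tree, so the old version stays valid."""
    if n == 1:
        return x
    half = n // 2
    if i < half:
        return (update(half, i, x, t[0]), t[1])
    return (t[0], update(half, i - half, x, t[1]))
```

For example, `index(4, 2, from_list([10, 20, 30, 40]))` descends two levels to reach the element at position 2, and `update` produces a new version without destroying the old one, which is exactly why the imperative $O(1)$ destructive write has no direct pure-FP counterpart.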
Best Answer from Stack Exchange
Question Source : http://cs.stackexchange.com/questions/2240