[Solved]: Is a polynomial-time algorithm with exponential space eligible?

Problem Detail: I’m curious about two things.

1. When we define the class of "probabilistic polynomial-time algorithms" in computer science, does it include polynomial-time algorithms that use exponential space? For example, suppose an algorithm is given an input from the domain $\{0,1\}^n$ and internally queries an exponential-sized table (e.g. $0^n \to 0$, $0^{n-1}1 \to 1$, and so on) and outputs the result. Is it still a polynomial-time algorithm?
2. In theoretical cryptography, a one-way function $f:\{0,1\}^* \to \{0,1\}^*$ must satisfy a hard-to-invert requirement, stated in the block below. If the answer to the question above is yes, is it possible to construct an algorithm $A'$ that inverts $f$ exactly on every value in $\{0,1\}^n$ using an exponential table as described above? If so, that would imply it is impossible to design a one-way function, which is certainly not true. So what have I missed?

For every probabilistic polynomial-time algorithm $A'$, every positive polynomial $p(\cdot)$, and all sufficiently large $n$'s, $\Pr[A'(f(U_n), 1^n) \in f^{-1}(f(U_n))] < \frac{1}{p(n)}$, where $U_n$ is a random variable uniformly distributed over $\{0,1\}^n$.
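To make the thought experiment concrete, here is a minimal Python sketch of the table-based inverter the question envisions. The function `f` here is a hypothetical toy function (certainly not one-way), introduced only for illustration; the point is that building the table already costs $2^n$ time and space:

```python
# Hypothetical toy function f on n-bit strings (NOT a one-way function);
# it stands in for any f we might try to invert by brute-force tabulation.
def f(x: str) -> str:
    n = len(x)
    return format((int(x, 2) * 3 + 1) % (2 ** n), f"0{n}b")

def build_inversion_table(n: int) -> dict:
    """Tabulate one preimage for every image of f -- 2^n time and space."""
    table = {}
    for v in range(2 ** n):
        x = format(v, f"0{n}b")
        table.setdefault(f(x), x)  # keep the first preimage seen per image
    return table

n = 8
table = build_inversion_table(n)   # exponential preprocessing step
y = f("00101101")
x = table[y]                       # cheap lookup *after* the table exists
assert f(x) == y                   # x is indeed a preimage of y
```

The lookup itself is fast, but the preprocessing is exponential, which is exactly the gap the answer below pins down.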

Answered By : Yuval Filmus

Regarding your first question, what you're missing is where your "exponential table" comes from. Your algorithm has a finite description and should work for every $n$, so it cannot explicitly contain the table for every $n$. It could contain instructions for computing the table, but then it would have to execute them first, and constructing an exponential-size table takes exponential time.

On the other hand, your program could use a (supposedly) exponential amount of uninitialized space. Since its running time is polynomial, it only ever accesses polynomially many cells. You can implement memory access in such a way that if $T$ cells are ever accessed then only $\tilde{O}(T)$ memory is used (exercise). The corresponding running time might become much worse, but it is still polynomial.

A third possibility is non-uniform computation models, which are popular in cryptography. Here the idea is that the algorithm is allowed to use a certain hard-coded amount of data which depends only on $n$; however, this data has to be of polynomial size. This restriction comes from the interpretation of the model in terms of circuits: a machine running in polynomial time corresponds to a family of polynomial-size circuits, one for each $n$. If we relax the constraint that all these circuits come from a single algorithm, we get non-uniform polynomial time, which is the same as polynomial-time computation with polynomial-size advice depending on $n$ (exercise).

The answer to the first question should obviate the second one. I would just mention that sometimes, instead of probabilistic polynomial-time algorithms, one considers (deterministic) polynomial-size circuits.
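The second point above, that $T$ accessed cells need only cost $\tilde{O}(T)$ real memory, can be sketched with a dictionary-backed tape. This `SparseTape` class is an illustrative assumption of how one might realize "uninitialized exponential space" lazily, not part of the original answer:

```python
class SparseTape:
    """A notionally huge (e.g. 2^n-cell) tape that only materializes
    the cells actually touched: T accesses cost O(T) real memory."""
    def __init__(self, default=0):
        self.cells = {}        # address -> value, only for touched cells
        self.default = default

    def read(self, addr: int):
        # Untouched cells behave as uninitialized memory with a default value.
        return self.cells.get(addr, self.default)

    def write(self, addr: int, value) -> None:
        self.cells[addr] = value

tape = SparseTape()
tape.write(2 ** 100, 7)        # an address far beyond any physical memory
assert tape.read(2 ** 100) == 7
assert tape.read(5) == 0       # untouched cells read as the default
assert len(tape.cells) == 1    # real memory used: one touched cell
```

A hash map gives expected constant-time access; a balanced search tree keyed by address would give the worst-case $\tilde{O}(T)$ bound mentioned in the answer.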

Question Source : http://cs.stackexchange.com/questions/13655