Problem Detail: I have a multilayer perceptron with an input layer of two neurons, a hidden layer with an arbitrary number of neurons, and an output layer of two neurons. Given that randomboolean and targetboolean are random boolean values, the network operates as follows:
input(randomboolean);      // Set the input neurons to reflect the random boolean
propagateforwards();       // Perform standard forward propagation
outputboolean = output();  // Get the network's output
ideal(targetboolean);      // Update connection weights via back-propagation
Is it possible to get the network to map the randomboolean value to the targetboolean value in such a way that the outputboolean value will correctly match the targetboolean, while running in an 'on-line' mode (where prediction occurs alongside continued learning), after some arbitrary number of training cycles? I hear that the network needs to be recurrent to handle this, as it may involve temporal behaviour; however, the MLP is a universal function approximator, so I assume it should be able to approximate the temporal behaviour needed for this task.
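The on-line loop described above can be sketched as follows. This is a minimal illustration, not the asker's code: a single logistic unit stands in for the MLP, and `predict()`/`update()` are hypothetical names collapsing the question's `input()`, `propagateforwards()`, `output()`, and `ideal()` calls:

```python
import random
import math

random.seed(0)

# Hypothetical single-unit "network": weight w, bias b, sigmoid output.
w, b, lr = 0.0, 0.0, 0.1

def predict(x):
    """Forward pass: probability that the output should be True."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def update(x, target):
    """One online back-propagation step (logistic loss gradient)."""
    global w, b
    err = predict(x) - (1.0 if target else 0.0)
    w -= lr * err * x
    b -= lr * err

n, correct = 10000, 0
for _ in range(n):
    randomboolean = random.random() < 0.5
    targetboolean = random.random() < 0.5   # independent of the input
    x = 1.0 if randomboolean else 0.0
    outputboolean = predict(x) > 0.5        # predict first ...
    if outputboolean == targetboolean:
        correct += 1
    update(x, targetboolean)                # ... then keep learning

accuracy = correct / n
print(f"online accuracy: {accuracy:.3f}")
```

Running this with a fixed seed shows the accuracy hovering around chance level, which foreshadows the answer below.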
Asked By : marscom
Answered By : Anton
The answer is no. What you are asking for is to predict randomness. The network takes randomboolean (true/false) as input and produces outputboolean, which is a deterministic function of that input, not a random value. The generation of targetboolean is independent of randomboolean, so there is no input-to-target function for the network to learn. Perceptrons learn functions, and if $f(A)=B$ and $f(A)=C$ with $B \neq C$, then $f$ is not a function. EDIT: To predict temporal behavior you should add some time-dependent variable to the input of the network.
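The independence argument can be checked empirically. Because targetboolean is drawn independently of the input, every deterministic strategy (including whatever function an MLP converges to) achieves an expected accuracy of exactly 1/2. A quick simulation with three illustrative strategies (the names are invented for this sketch):

```python
import random

random.seed(1)

n = 100000
strategies = {
    "always True":  lambda x: True,
    "copy input":   lambda x: x,
    "negate input": lambda x: not x,
}
hits = {name: 0 for name in strategies}

for _ in range(n):
    x = random.random() < 0.5   # randomboolean
    t = random.random() < 0.5   # targetboolean, independent of x
    for name, f in strategies.items():
        if f(x) == t:
            hits[name] += 1

for name, h in hits.items():
    print(f"{name}: {h / n:.3f}")
```

All three strategies land near 0.5, as does any other deterministic predictor: no amount of training can do better when the target carries no information about the input.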
Best Answer from Stack Exchange
Question Source : http://cs.stackexchange.com/questions/9137