Question

I was wondering: in a multi-layer feed-forward neural network, should the input layer include a bias neuron, or is it only useful in hidden layers? If so, why?


Solution

No, the input layer doesn't need a connection to the bias neuron: any activation it received from the bias would be completely overridden the moment the actual input is clamped onto it.

For example, imagine a network that's trying to solve the classic XOR problem, using this architecture (where the neuron just marked 1 is the bias):

[Figure: a feed-forward network for XOR with input neurons X1 and X2, a hidden layer, an output neuron, and a bias neuron labeled 1 feeding the hidden and output layers]

To run this network on the input (1, 0), you simply clamp the activations of the input neurons: X1 = 1 and X2 = 0. Now, even if X1 or X2 also received input from the bias neuron, that input would be overridden anyway by the clamped value, making such a connection pointless.
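To make this concrete, here is a minimal sketch of an XOR network along those lines. The specific weights and the step activation are illustrative choices (not taken from the figure): the bias neuron's contribution appears only as the constant terms in the hidden and output neurons, while the input neurons are just clamped to the raw inputs.

```python
def step(z):
    # Heaviside step activation: fires if the weighted sum is positive.
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Input layer: activations are simply clamped to the inputs.
    # No bias connection here -- it would be overridden anyway.
    #
    # Hidden layer: each neuron has a bias weight (the constant terms).
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)   # behaves like OR
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)   # behaves like AND
    # Output layer: also uses a bias weight.
    y = step(1.0 * h1 - 2.0 * h2 - 0.5)    # OR but not AND = XOR
    return y
```

Running `xor_net` on all four input pairs reproduces the XOR truth table; removing the bias terms (the `-0.5` and `-1.5` constants) breaks it, which is why the hidden and output layers need the bias while the clamped input layer does not.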

Licensed under: CC-BY-SA with attribution