Question

When training a multi-layer neural network, a differentiable activation function such as the sigmoid is needed so that backpropagation can compute gradients and the network can learn efficiently.

Is there any advantage to using a sigmoidal activation function when training a single-layer perceptron, or is a simple step (Heaviside) function sufficient (or even preferable)?

I'm slowly getting my head around neural networks but any help with this would be appreciated.


Solution

Yes, there is an advantage. A sigmoid produces a continuous output between 0 and 1, so the answer doesn't have to be a hard YES or NO; it can also be a graded MAYBE that you can read as a confidence. Just as importantly, the sigmoid is differentiable, so even a single neuron can be trained with gradient descent (the delta rule), whereas the step function has no useful gradient and limits you to the classic perceptron learning rule.

Whether you need it depends on how your output is read out: do you only need binary (YES/NO) values, or also something in-between?

If you don't want a sigmoidal function, a linear activation also works with gradient descent, though its output is unbounded rather than squashed into (0, 1).
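To make the gradient-descent point concrete, here is a minimal sketch (my own illustration, not from the original answer) of a single sigmoid neuron trained with the delta rule on the AND function; all names and the learning-rate/epoch choices are arbitrary:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# AND truth table: ((x1, x2), target)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.5        # learning rate (chosen arbitrarily)

for epoch in range(5000):
    for (x1, x2), t in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # delta rule: gradient of squared error uses the
        # sigmoid derivative y * (1 - y) -- this step is
        # impossible with a step function, whose derivative is 0
        delta = (t - y) * y * (1 - y)
        w[0] += lr * delta * x1
        w[1] += lr * delta * x2
        b += lr * delta

for (x1, x2), t in data:
    y = sigmoid(w[0] * x1 + w[1] * x2 + b)
    print(f"{x1} AND {x2} -> {y:.3f} (target {t})")
```

Thresholding the continuous outputs at 0.5 recovers the binary answers, but the raw values also tell you how confident the neuron is, which is exactly the "MAYBE" the step function cannot express.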

Licensed under: CC-BY-SA with attribution