Question

I am learning a model from examples $\{((x_{i1}, x_{i2}, \dots, x_{ip}), y_i)\}_{i=1,\dots,N}$ using a feed-forward multilayer perceptron (`newff` in the Python library neurolab). I expect the output of the NN to be positive for any further simulation of the network. How can I make sure that the results of simulating my trained NN are always positive? (How do I do this in neurolab?)


Solution

Simply use a standard sigmoid/logistic activation function on the output neuron. sigmoid(x) > 0 for all real-valued x, so that should do what you want.
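In neurolab you can choose per-layer transfer functions by passing a `transf` list to `newff`. A minimal sketch (the toy data, layer sizes, and training settings are placeholders; note that with a logistic output your training targets should also lie in (0, 1)):

```python
import numpy as np
import neurolab as nl

# Toy training data: N=50 examples with p=2 inputs and positive targets.
x = np.random.uniform(-1, 1, (50, 2))
y = (x[:, :1] ** 2 + x[:, 1:] ** 2) / 2.0   # targets scaled into [0, 1]

# Two-layer net: 5 hidden tanh neurons, 1 output neuron with a logistic
# sigmoid (LogSig), so every simulated output lies strictly in (0, 1).
net = nl.net.newff([[-1, 1], [-1, 1]],      # [min, max] per input feature
                   [5, 1],
                   transf=[nl.trans.TanSig(), nl.trans.LogSig()])

net.train(x, y, epochs=500, show=100, goal=0.01)
out = net.sim(x)                            # out > 0 everywhere, by construction
```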

By default, many neural network libraries use either linear or symmetric sigmoid (tanh) output neurons, both of which can produce negative values.

Just note that networks with a standard sigmoid on the output take longer to train. In practice it is usually better to let the raw outputs go negative and transform them into the range [0, 1] after the fact: shift up by the minimum, then divide by the range (max - min).
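That post-hoc transform is just a min-max rescaling of the simulated outputs; a quick sketch:

```python
import numpy as np

# Hypothetical raw outputs from a net with a linear or tanh output layer.
raw = np.array([-0.8, -0.1, 0.4, 1.3])

# Shift up by the minimum, divide by the range (max - min) -> values in [0, 1].
lo, hi = raw.min(), raw.max()
scaled = (raw - lo) / (hi - lo)
```

One caveat: the min and max here come from a single batch of outputs, so values from future simulations can fall outside [0, 1] unless you reuse the same bounds.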

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow