I have a problem with backpropagation learning using AForge.NET (Neuro Learning - Backpropagation). I am trying to implement a neural network as in the samples (Approximation). My setup is:

1. input vector {1,2,3,...,19,20}
2. output vector {1,2,3,...,19,20} (it's a linear function)
3. `ActivationNetwork network = new ActivationNetwork(new BipolarSigmoidFunction(2), 1, 20, 1);`
4. then about 10k calls to `teacher.RunEpoch(input, output);`

When learning is complete, `network.Compute()` returns values in [-1, 1]. Why?

In the sample there is a step that normalises the values of the vectors (x -> [-1, 1] and y -> [-0.85, 0.85]), and when I do that everything works fine... but that is only the sample I am using to learn how neural networks work. The actual problem I want to implement is more complex (more than 40 input neurons).

Can anyone help me?


Solution

I have not worked with AForge yet, but the BipolarSigmoidFunction is most probably tanh-like, i.e. its output is bounded to [-1, 1]. Such activations are usually used for classification, or sometimes for bounded regression. The network's output neuron also uses this activation, so it can never produce values outside [-1, 1], no matter how long you train. In your case you can either scale the data into that range (and scale the predictions back afterwards) or use a linear activation function on the output (e.g. identity, g(a) = a).
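The scaling the sample performs can be reproduced with a plain min-max transform plus its inverse. AForge.NET is C#, but the arithmetic is language-agnostic, so here is a minimal Python sketch (the function names `scale`/`unscale` and the [-0.85, 0.85] target range are taken from the question, not from any AForge API):

```python
def scale(values, lo=-0.85, hi=0.85):
    """Min-max scale values into [lo, hi]; also return the
    parameters needed to invert the transform later."""
    vmin, vmax = min(values), max(values)
    scaled = [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]
    return scaled, (vmin, vmax, lo, hi)

def unscale(scaled, params):
    """Map network outputs back to the original value range."""
    vmin, vmax, lo, hi = params
    return [vmin + (s - lo) * (vmax - vmin) / (hi - lo) for s in scaled]

y = list(range(1, 21))            # targets 1..20, outside tanh's range
y_scaled, p = scale(y)            # now every target fits inside [-1, 1]
y_back = unscale(y_scaled, p)     # recover original scale from predictions
```

Training against `y_scaled` keeps the targets reachable by a bipolar sigmoid output; the margin (0.85 instead of 1.0) avoids forcing the activation into its saturated region, where gradients are nearly zero and learning stalls.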

Licensed under: CC-BY-SA with attribution. Not affiliated with Stack Overflow.