Does it make any sense that weights and threshold are growing proportionally when training my perceptron?

StackOverflow https://stackoverflow.com/questions/7175401

Problem

I am taking my first steps with neural networks, and to do so I am experimenting with a very simple single-layer, single-output perceptron that uses a sigmoid activation function. I update my weights online each time a training example is presented, using:

weights += learningRate * (correct - result) * {input,1}

Here weights is an n-length vector that also contains the weight from the bias neuron (the negative threshold), result is the output computed by the perceptron (passed through the sigmoid) for the given input, correct is the expected result, and {input,1} is the input augmented with 1 (the fixed input from the bias neuron). Now, when I try to train the perceptron to perform logical AND, the weights do not converge even after a long time; instead they keep growing at similar rates and maintain a ratio of roughly -1.5 with the threshold. For instance, the three weights are, in sequence:

5.067160008240718   5.105631826680446   -7.945513136885797
...
8.40390853077094    8.43890306970281    -12.889540730182592

I would expect the perceptron to stop at 1, 1, -1.5.
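
A minimal sketch of the setup described above (Python; the helper names, initial values, and learning rate are illustrative assumptions, not the original code):

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training set for logical AND; the last component of each input is the
# fixed 1 fed to the bias weight.
examples = [
    ([0.0, 0.0, 1.0], 0.0),
    ([0.0, 1.0, 1.0], 0.0),
    ([1.0, 0.0, 1.0], 0.0),
    ([1.0, 1.0, 1.0], 1.0),
]

weights = [0.0, 0.0, 0.0]   # [w1, w2, bias weight = -threshold]
learning_rate = 0.5

for epoch in range(10000):
    for inputs, correct in examples:
        result = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
        # Online update: weights += learningRate * (correct - result) * {input, 1}
        weights = [w + learning_rate * (correct - result) * x
                   for w, x in zip(weights, inputs)]

print(weights)   # the magnitudes keep growing while keeping roughly the same ratio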

Apart from this problem, which looks like it is connected to some missing stopping condition in the learning, if I try to use the identity function as the activation function, I get weight values oscillating around:

0.43601272528257057 0.49092558197172703 -0.23106430854347537

and I obtain similar results with tanh. I can't find an explanation for this.

Thank you
Tunnuz


Solution

This is because the sigmoid activation function never quite reaches one (or zero), even for very large positive (or negative) inputs. So (correct - result) will always be non-zero, and your weights will always get updated. Try it with the step function as the activation function (i.e. f(x) = 1 for x > 0, f(x) = 0 otherwise).
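
As a rough sketch (illustrative Python, not the asker's code): with a step activation the error term is exactly zero as soon as every example is classified correctly, so the updates stop.

def step(x):
    # Hard threshold: the output is exactly 0 or 1, so (correct - result)
    # can be exactly zero and the updates can actually stop.
    return 1.0 if x > 0 else 0.0

examples = [
    ([0.0, 0.0, 1.0], 0.0),
    ([0.0, 1.0, 1.0], 0.0),
    ([1.0, 0.0, 1.0], 0.0),
    ([1.0, 1.0, 1.0], 1.0),
]

weights = [0.0, 0.0, 0.0]
learning_rate = 0.5

for epoch in range(100):
    changed = False
    for inputs, correct in examples:
        result = step(sum(w * x for w, x in zip(weights, inputs)))
        error = correct - result              # exactly 0 on a correct example
        if error != 0.0:
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            changed = True
    if not changed:                           # a full pass with no updates: done
        break

print(weights)   # converges to a finite separating hyperplane for AND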

Your average weight values don't seem right for the identity activation function. It might be that your learning rate is a little high -- try reducing it and see if that reduces the size of the oscillations.

Also, when doing online learning (aka stochastic gradient descent), it is common practice to reduce the learning rate over time so that you converge to a solution. Otherwise your weights will continue to oscillate.
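
One common schedule (sketched below; the constants are arbitrary and would need tuning) divides the initial rate by a slowly growing function of the number of online updates:

def learning_rate(t, initial_rate=0.5, decay=0.01):
    # eta_t = eta_0 / (1 + decay * t): the step size shrinks toward zero
    # as the number of online updates t grows, damping the oscillation.
    return initial_rate / (1.0 + decay * t)

# Step size after 0, 100, and 10000 online updates.
print(learning_rate(0), learning_rate(100), learning_rate(10000))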

When trying to analyze the behavior of the perceptron, it helps to also look at correct and result.
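
For instance, a small helper like the one below (hypothetical names, Python) prints correct, result, and the residual error; evaluated at the last weights quoted in the question, it shows the error getting small but never reaching exactly zero:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def report(weights, inputs, correct):
    # Print the target, the sigmoid output, and the residual error for one example.
    result = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
    print(f"correct={correct} result={result:.4f} error={correct - result:+.4f}")

report([8.4039, 8.4389, -12.8895], [1.0, 1.0, 1.0], 1.0)
report([8.4039, 8.4389, -12.8895], [1.0, 0.0, 1.0], 0.0)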

License: CC-BY-SA with attribution
Not affiliated with StackOverflow