Problem

I am trying to train a simple neural network with PyBrain. After training I want to confirm that the network is working as intended, so I activate it on the same data I used to train it. However, every activation outputs the same result. Am I misunderstanding a basic concept about neural networks, or is this by design?

I have tried altering the number of hidden nodes, the hiddenclass type, the bias, the learningrate, the number of training epochs, and the momentum, to no avail.

This is my code:

from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

# 2 inputs, 3 hidden nodes, 1 output
net = buildNetwork(2, 3, 1)
net.randomize()

ds = SupervisedDataSet(2, 1)
ds.addSample([77, 78], 77)
ds.addSample([78, 76], 76)
ds.addSample([76, 76], 75)

# Train for up to 1000 epochs, stopping early if the error gets small
trainer = BackpropTrainer(net, ds)
for epoch in range(1000):
    error = trainer.train()
    if error < 0.001:
        break

print(net.activate([77, 78]))
print(net.activate([78, 76]))
print(net.activate([76, 76]))

This is an example of what the results can be. As you can see, the output is the same even though the activation inputs are different:

[ 75.99893007]
[ 75.99893007]
[ 75.99893007]

Solution 2

In the end I solved this by normalizing the data to the range 0 to 1 and training until the error dropped below 0.00001. Training takes much longer, but I do get accurate results now.
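The normalization step described above can be sketched in plain Python. This is a minimal min-max scaling example; the helper names `scale_to_unit` and `scale_from_unit` are my own, not part of the PyBrain API, and the shared lo/hi range is one simple choice among several.

```python
def scale_to_unit(value, lo, hi):
    """Map value from [lo, hi] into [0, 1]."""
    return (value - lo) / float(hi - lo)

def scale_from_unit(value, lo, hi):
    """Map value from [0, 1] back into [lo, hi]."""
    return value * (hi - lo) + lo

# Raw training data from the question.
samples = [([77, 78], 77), ([78, 76], 76), ([76, 76], 75)]

# Use one shared range so inputs and targets stay on the same scale.
flat = [v for inputs, target in samples for v in inputs + [target]]
lo, hi = min(flat), max(flat)

normalized = [([scale_to_unit(v, lo, hi) for v in inputs],
               scale_to_unit(target, lo, hi))
              for inputs, target in samples]
```

The idea is to add the `normalized` samples to the `SupervisedDataSet` instead of the raw values, train as before, and then map each `net.activate(...)` output back to the original scale with `scale_from_unit(output, lo, hi)`.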

Other tips

I had a similar problem. I was able to improve the accuracy (i.e. get a different answer for each input) by doing the following.

  1. Normalizing/Standardizing input and output to the neural network
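As an alternative to min-max normalization, standardizing to z-scores also works. Below is a small sketch in plain Python; the `standardize`/`destandardize` helpers are hypothetical names for illustration, not PyBrain functions.

```python
import math

def standardize(values):
    """Return z-scores plus the (mean, std) needed to invert them."""
    mean = sum(values) / float(len(values))
    var = sum((v - mean) ** 2 for v in values) / float(len(values))
    std = math.sqrt(var) or 1.0  # guard against zero variance
    return [(v - mean) / std for v in values], mean, std

def destandardize(z, mean, std):
    """Map a z-score back to the original scale."""
    return z * std + mean

# Targets from the question; inputs would be treated the same way.
targets = [77, 76, 75]
z_targets, mean, std = standardize(targets)
```

Train the network on the standardized values, then convert each output back with `destandardize(output, mean, std)`.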

License: CC-BY-SA with attribution
Not affiliated with StackOverflow