Question

I am using a PyBrain BackPropTrainer on a RecurrentNetwork with multiple output layers, and I need the training error for each of those layers individually. How should I go about this? That is, do I need to extend the source code itself, or is there already a way to do this?

I have looked at BackPropTrainer.train(), but it returns only a single error value for the entire network at each training step.

This question addresses getting the activation values for an individual module, but only after training.

Not sure where to turn from here.

Thanks!


Solution

I had no idea the solution was so simple: just run the network on the test data after each training step and use the activation values it produces.
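Once you have the per-sample activations of an output layer (e.g. via the network's activate() call after each epoch), the per-unit error is straightforward to compute. A minimal, library-agnostic sketch (the function name and mean-squared-error choice are illustrative, not part of PyBrain's API):

```python
def per_output_errors(activations, targets):
    """Mean squared error for each output unit, averaged over samples.

    activations -- list of output vectors produced by the network
    targets     -- list of target vectors of the same shape
    """
    n = len(activations)
    dims = len(activations[0])
    errors = [0.0] * dims
    for act, tgt in zip(activations, targets):
        for i in range(dims):
            errors[i] += (act[i] - tgt[i]) ** 2
    # Average each unit's accumulated squared error over all samples.
    return [e / n for e in errors]
```

After each training step, collect the activations for every sample in the dataset and pass them, with the targets, to this helper; the result is one error value per output unit, which can be sliced by layer.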

If it really is necessary to get the exact errors produced during training, one could subclass BackPropTrainer and modify the train() and _calcDerivs() methods to return the error for each individual unit.

License: CC-BY-SA with attribution