I am experimenting with Elman and/or Jordan ANNs using the Encog framework. I am trying to code my own, but I am studying how Encog implements them. I see how backpropagation through time updates the weights, but how do the context neurons get updated? Their values seem to fluctuate somewhat randomly as the output of the neural network is calculated. How is it that these values actually allow a simple recurrent neural network to recognize patterns in input data over time?


Solution

The context neuron values themselves are not updated as training progresses; only the weights between the context neurons and the next layer are adjusted during training. The context values change as the neural network runs: each time the network computes its output, the hidden (Elman) or output (Jordan) activations are copied into the context neurons, so they hold a snapshot of the previous time step and feed it back into the next computation.
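Here is a minimal sketch in Java (not Encog's actual API; class and field names such as `ElmanSketch`, `inputWeights`, and `contextWeights` are illustrative) showing how an Elman-style context layer behaves: the weights are what training would adjust, while the context array is simply overwritten with the latest hidden activations on every forward pass.

```java
import java.util.Random;

// Illustrative Elman-style forward pass, assuming one hidden layer with tanh activation.
public class ElmanSketch {
    private final int inputCount = 2;
    private final int hiddenCount = 3;

    // Trainable weights: input->hidden and context->hidden.
    // Backpropagation through time would update these arrays, never the context values.
    private final double[][] inputWeights = new double[hiddenCount][inputCount];
    private final double[][] contextWeights = new double[hiddenCount][hiddenCount];

    // Context neurons: a copy of the previous step's hidden activations.
    private final double[] context = new double[hiddenCount];

    public ElmanSketch() {
        Random rnd = new Random(42);
        for (int h = 0; h < hiddenCount; h++) {
            for (int i = 0; i < inputCount; i++) inputWeights[h][i] = rnd.nextDouble() - 0.5;
            for (int c = 0; c < hiddenCount; c++) contextWeights[h][c] = rnd.nextDouble() - 0.5;
        }
    }

    public double[] compute(double[] input) {
        double[] hidden = new double[hiddenCount];
        for (int h = 0; h < hiddenCount; h++) {
            double sum = 0.0;
            for (int i = 0; i < inputCount; i++) {
                sum += inputWeights[h][i] * input[i];
            }
            for (int c = 0; c < hiddenCount; c++) {
                sum += contextWeights[h][c] * context[c];
            }
            hidden[h] = Math.tanh(sum);
        }
        // The context "update": copy the new hidden activations so they feed back
        // into the next call. This happens every time the network runs, which is why
        // the context values appear to fluctuate while outputs are being calculated.
        System.arraycopy(hidden, 0, context, 0, hiddenCount);
        return hidden;
    }
}
```

Because the context-to-hidden weights are trained like any other weights, the network learns how much of that previous-step snapshot to mix into the current computation, which is what lets it pick up patterns that span multiple time steps.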
