In Chapter 6.10.3, 'Net pruning' (page 53), of An Introduction to Neural Networks by Kevin Gurney, a complexity penalty is introduced into the back-propagation training algorithm. The complexity penalty is as follows:

$$ E_c=\sum_{i}w_i $$
$$ E = E_t + \lambda E_c $$

E_t is the error used so far, based on input-output differences.
Training then performs gradient descent on this total risk E.
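
Written out, the weight update this implies would be (assuming a learning rate η, which is not named in the excerpt):

$$ \Delta w_i = -\eta \frac{\partial E}{\partial w_i} = -\eta \left( \frac{\partial E_t}{\partial w_i} + \lambda \frac{\partial E_c}{\partial w_i} \right) $$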

My question: after differentiating, the complexity penalty effectively disappears, since the derivative of E_c with respect to each weight w_i is just the constant 1. How can it then affect the training?


Solution

E_c should be the sum of the absolute values of the weights (L1 regularization) or the sum of their squares (L2 regularization). Either way, the penalty's derivative still depends on the weight, so it does not vanish from the update and keeps pulling weights toward zero.
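
As a minimal sketch of this (the function names, learning rate, and λ value below are illustrative, not taken from the book or the question), here is how each penalty's gradient enters the weight update:

```python
import numpy as np

# Toy illustration (my own sketch, not code from Gurney's book):
# gradient of the total risk E = E_t + lambda * E_c for the two
# penalties suggested above, applied to a single weight vector.

def l1_penalty_grad(w):
    # E_c = sum(|w_i|)  ->  dE_c/dw_i = sign(w_i)
    return np.sign(w)

def l2_penalty_grad(w):
    # E_c = sum(w_i^2)  ->  dE_c/dw_i = 2 * w_i
    return 2.0 * w

def total_grad(error_grad, w, lam, penalty_grad):
    # dE/dw_i = dE_t/dw_i + lambda * dE_c/dw_i
    return error_grad + lam * penalty_grad(w)

# Example: the penalty's gradient still depends on w, so it keeps
# shrinking the weights during gradient descent.
w = np.array([0.8, -0.3, 1.5])
error_grad = np.zeros_like(w)   # pretend E_t is already minimised
eta, lam = 0.1, 0.01            # illustrative learning rate and lambda

for _ in range(5):
    w -= eta * total_grad(error_grad, w, lam, l2_penalty_grad)
print(w)  # weights have shrunk slightly, showing the penalty's effect
```

With the L1 form the extra term is λ·sign(w_i), and with the L2 form it is 2λw_i; in both cases the term is nonzero for nonzero weights, which is what lets the penalty prune the network.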
