Question

In Section 6.10.3, 'Net pruning' (page 53), of An Introduction to Neural Networks by Kevin Gurney, a complexity penalty is introduced into the back-propagation training algorithm. The penalty is as follows:

$$ E_c=\sum_{i}w_i $$
$$ E = E_t + \lambda E_c $$

$E_t$ is the error used so far, based on input-output differences. Gradient descent is then performed on this total risk $E$.

My question: after differentiating, the complexity penalty's contribution is just a constant, and its dependence on the weights disappears. How can it affect the training?
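Concretely, the derivative I get is

$$ \frac{\partial E}{\partial w_i} = \frac{\partial E_t}{\partial w_i} + \lambda \frac{\partial}{\partial w_i}\sum_{j} w_j = \frac{\partial E_t}{\partial w_i} + \lambda, $$

i.e. the penalty only adds the same constant $\lambda$ to every weight's gradient, no matter the weight's value.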


Solution

$E_c$ should be the sum of the absolute values of the weights (an L1 penalty) or the sum of their squares (an L2 penalty). The plain sum $\sum_i w_i$ has the constant derivative 1 with respect to every weight, so it merely shifts all gradients by the same amount (and actually rewards large negative weights); the L1 and L2 forms give each weight a gradient term that pushes it toward zero, which is what pruning needs.
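Written out (a standard derivation, not quoted from the book), the two corrected penalties give

$$ E_c = \sum_i |w_i| \quad\Rightarrow\quad \frac{\partial E}{\partial w_i} = \frac{\partial E_t}{\partial w_i} + \lambda\,\operatorname{sgn}(w_i), $$
$$ E_c = \sum_i w_i^2 \quad\Rightarrow\quad \frac{\partial E}{\partial w_i} = \frac{\partial E_t}{\partial w_i} + 2\lambda w_i, $$

so every update now contains a term that decays each weight toward zero: at a fixed rate for L1, and in proportion to the weight's size for L2 (standard weight decay).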
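As a quick numerical illustration (a hypothetical toy example, not from the book; the data, learning rate, and $\lambda$ are all made-up values):

```python
import numpy as np

# Toy sketch: gradient descent on a least-squares error E_t plus a
# complexity penalty lambda * E_c, comparing the three penalty forms.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([2.0, -1.0, 0.0, 0.0, 0.5])   # two weights are irrelevant
y = X @ true_w + 0.1 * rng.normal(size=100)

def fit(penalty, lam=0.1, lr=0.01, steps=2000):
    w = np.zeros(5)
    for _ in range(steps):
        grad_Et = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the data-fit error
        if penalty == "L1":
            grad_Ec = np.sign(w)                  # d/dw_i of sum |w_j|
        elif penalty == "L2":
            grad_Ec = 2 * w                       # d/dw_i of sum w_j^2
        else:                                     # the plain sum from the question
            grad_Ec = np.ones_like(w)             # constant 1: ignores each weight's value
        w -= lr * (grad_Et + lam * grad_Ec)
    return w

for p in ("sum", "L1", "L2"):
    print(p, np.round(fit(p), 3))
```

With the L1 penalty the two irrelevant weights end up at (essentially) zero, while the plain sum merely shifts every weight by the same constant offset, which is why it cannot prune.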

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow