Question

I've been writing a neural network from scratch. I've completed the feedforward, backpropagation, and mini-batch gradient descent methods, so I can train the network. Other neural networks I've worked with usually display the error/loss after each batch as a single decimal value, and I'd like to implement this functionality but I'm not sure how.

I understand squared error is given by $(y - \hat{y})^2$, and that for an output layer with $m$ neurons, you should have an error vector of size $m$. However, how is the error vector displayed as one value?

Solution

You can simply sum the absolute values of the $m$ error components to get one overall error value for the network, or you can compute the mean squared error (MSE) by averaging the squared errors. The choice will not change much: in both cases, the smaller the value, the better your neural network is doing. By convention, though, the mean squared error is usually preferred.
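As a minimal sketch (assuming NumPy and made-up target/prediction values), collapsing the size-$m$ error vector to a single scalar looks like this:

```python
import numpy as np

# Hypothetical targets and predictions for an output layer with m = 3 neurons
y = np.array([1.0, 0.0, 0.0])
y_hat = np.array([0.8, 0.1, 0.3])

# Per-neuron squared error: a vector of size m
squared_errors = (y - y_hat) ** 2

# Mean squared error: one scalar, the usual convention
mse = squared_errors.mean()

# Alternative: sum of absolute errors, also one scalar
sae = np.abs(y - y_hat).sum()

print(mse)  # ~0.0467
print(sae)  # 0.6
```

In practice you would average this scalar over all examples in the batch and print it after each update.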

Licensed under: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange