Question

During hyperparameter tuning we select a metric to measure the performance of the model. Examples of metrics: F1 score, precision, recall, AUC, ...

In general, when training neural networks, back-propagation optimizes the weights of the model according to the value of the loss function.

Here is the question: why don't people use the loss function as the main performance metric for neural network optimization?
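
To make the question concrete, here is a minimal sketch (assuming scikit-learn and a synthetic dataset, both chosen only for illustration) of the usual split of roles: back-propagation fits the weights by minimizing a differentiable loss (log-loss here), while hyperparameters are compared on a held-out set with a task metric such as F1.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score, log_loss

# Hypothetical imbalanced binary classification problem.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Hypothetical hyperparameter search space: hidden layer size.
for hidden in [(16,), (64,), (128,)]:
    model = MLPClassifier(hidden_layer_sizes=hidden, max_iter=300,
                          random_state=0)
    model.fit(X_train, y_train)            # weights optimized against log-loss
    proba = model.predict_proba(X_val)[:, 1]
    val_loss = log_loss(y_val, proba)       # the loss back-propagation minimizes
    val_f1 = f1_score(y_val, proba >= 0.5)  # the metric used to compare hyperparameters
    print(f"hidden={hidden}  val_log_loss={val_loss:.3f}  val_f1={val_f1:.3f}")
```

The question, restated against this sketch, is why the comparison in the last lines is typically done with F1 (or precision, recall, AUC) rather than with the validation loss itself.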

