Question

I'm having trouble understanding why I would use dropout, regularization, data augmentation, etc., to reduce overfitting in the first place. I get that if your model is too large or your data is too sparse, the model may start to memorize the training set and perform poorly on new data. However, are there any cases in which adding dropout, regularization, etc., would actually increase accuracy on the validation set? For instance, if my training accuracy is 95% and my validation accuracy is 70%, would removing the overfitting simply bring the training accuracy down to the level of the validation accuracy? Or is there a way to actually improve validation accuracy? I assume there is, but some intuition on this would be very much appreciated!

