Question

In a linear model, regularization decreases the slope. Do we just assume that fitting a linear model on training data overfits by almost always producing a slope larger in magnitude than we would get with infinitely many observations? What is the intuition?

Solution

Regularization helps tame models with many dimensions. Take this example:

y = x_1 + eps*(x_2 + ... + x_100)

Let's say eps is very small. It doesn't seem very useful to estimate and store those 99 coefficients, does it? So how do we fit the model in such a way that negligible coefficients get dropped? This is exactly what L1 regularisation does!
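
As a rough illustration (an assumed setup, not part of the original answer), here is a minimal scikit-learn sketch of that scenario: the Lasso drives the 99 negligible coefficients to exactly zero while keeping the one on x_1. The alpha and noise level are arbitrary demo choices.

```python
# Minimal sketch: fit scikit-learn's Lasso on synthetic data
# matching the example above.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, eps = 500, 100, 0.01          # sample size, features, tiny coefficient

X = rng.normal(size=(n, p))
# y = x_1 + eps * (x_2 + ... + x_100), plus a little observation noise
y = X[:, 0] + eps * X[:, 1:].sum(axis=1) + 0.1 * rng.normal(size=n)

lasso = Lasso(alpha=0.05).fit(X, y)  # alpha chosen loosely for the demo
print("non-zero coefficients:", np.count_nonzero(lasso.coef_), "of", p)
print("coefficient on x_1:", round(lasso.coef_[0], 3))  # near 1, slightly shrunk
```

The 99 coefficients sitting below the penalty's soft threshold come out exactly zero, which is the "dropping" described above.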

Other types of regularisation each have a geometric intuition of their own.
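
For contrast, a similar sketch with L2 regularisation (ridge), again with arbitrary demo parameters: every coefficient shrinks toward zero, but essentially none is eliminated exactly, reflecting the round L2 constraint region versus the cornered L1 diamond.

```python
# Same kind of data with L2 regularisation (Ridge): coefficients shrink
# toward zero but, unlike with the Lasso, none hit zero exactly.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, p, eps = 500, 100, 0.01
X = rng.normal(size=(n, p))
y = X[:, 0] + eps * X[:, 1:].sum(axis=1) + 0.1 * rng.normal(size=n)

ridge = Ridge(alpha=10.0).fit(X, y)  # alpha is again an arbitrary demo value
print("non-zero coefficients:", np.count_nonzero(ridge.coef_), "of", p)
```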
