Question

I was wondering how one can best optimize a model when confronted with high bias or high variance. Of course, you can tune the regularization parameter until you reach a satisfying trade-off, but I was wondering whether it is possible to do this without relying on regularization.

If b is an estimate of a model's bias and v an estimate of its variance, wouldn't it make sense to try to minimize b*v?
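To make the proposal concrete, here is a minimal sketch on a synthetic problem where the true function is known, so bias is actually measurable. It estimates squared bias and variance of a polynomial fit by refitting on many independently drawn training sets, then reports their product for several model complexities (polynomial degree stands in for the regularization knob here). All names and parameter values are illustrative, not from any particular library:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
true_fn = lambda x: np.sin(2 * np.pi * x)      # ground truth (assumed known)
x_test = np.linspace(0, 1, 50).reshape(-1, 1)  # fixed evaluation points

def bias_variance(degree, n_datasets=200, n_samples=30, noise=0.3):
    """Estimate squared bias and variance of a degree-`degree` polynomial
    fit by averaging over many independently drawn training sets."""
    preds = np.empty((n_datasets, len(x_test)))
    for i in range(n_datasets):
        x = rng.uniform(0, 1, (n_samples, 1))
        y = true_fn(x).ravel() + rng.normal(0, noise, n_samples)
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        preds[i] = model.fit(x, y).predict(x_test)
    mean_pred = preds.mean(axis=0)
    b2 = np.mean((mean_pred - true_fn(x_test).ravel()) ** 2)  # squared bias
    v = np.mean(preds.var(axis=0))                            # variance
    return b2, v

for degree in (1, 3, 5, 9):
    b2, v = bias_variance(degree)
    print(f"degree={degree}: bias^2={b2:.4f}  var={v:.4f}  product={b2*v:.6f}")
```

Note that on real data the true function is unknown, so b can at best be approximated (e.g. via a held-out set), which is one practical obstacle to minimizing b*v directly.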
