Question

I was wondering how one can best optimize a model when confronted with high bias or high variance. Of course, you can tune the regularization parameter until you reach a satisfying trade-off, but I was wondering whether it is possible to do this without relying on regularization.

If b is an estimate of a model's bias and v an estimate of its variance, wouldn't it make sense to try to minimize b*v?
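For concreteness, here is a minimal sketch of what I mean, assuming a synthetic 1-D regression problem and scikit-learn (the data, degrees, and sample sizes below are illustrative only): estimate squared bias and variance for models of increasing complexity by refitting on repeated training draws, then compare the complexity chosen by minimizing b*v with the one chosen by minimizing b + v.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def true_f(x):
    # Illustrative ground-truth function; any smooth target would do.
    return np.sin(2 * np.pi * x)

# Fixed test grid on which bias and variance are measured.
x_test = np.linspace(0, 1, 100)
y_true = true_f(x_test)

def bias_variance(degree, n_train=30, n_repeats=200, noise=0.3):
    """Monte-Carlo estimate of squared bias (b) and variance (v) for a polynomial fit."""
    preds = np.empty((n_repeats, x_test.size))
    for i in range(n_repeats):
        # Draw a fresh training set each repeat.
        x = rng.uniform(0, 1, n_train)
        y = true_f(x) + rng.normal(0, noise, n_train)
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        model.fit(x[:, None], y)
        preds[i] = model.predict(x_test[:, None])
    mean_pred = preds.mean(axis=0)
    bias_sq = np.mean((mean_pred - y_true) ** 2)   # b: squared bias, averaged over x
    variance = np.mean(preds.var(axis=0))          # v: variance, averaged over x
    return bias_sq, variance

for degree in range(1, 10):
    b, v = bias_variance(degree)
    print(f"degree={degree}  b={b:.4f}  v={v:.4f}  b*v={b * v:.5f}  b+v={b + v:.4f}")
```

Here model complexity (polynomial degree) plays the role that the regularization parameter would otherwise play, so the question is whether b*v is a sensible criterion to pick it by.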

No correct solution

Licensed under: CC-BY-SA with attribution