Question

I'm reading the article Understanding the Bias-Variance Tradeoff. It says:

If we denote the variable we are trying to predict as $Y$ and our covariates as $X$, we may assume that there is a relationship relating one to the other such as $Y=f(X)+\epsilon$ where the error term $\epsilon$ is normally distributed with a mean of zero like so $\epsilon\sim\mathcal{N}(0,\,\sigma_\epsilon)$.

We may estimate a model $\hat{f}(X)$ of $f(X)$. The expected squared prediction error at a point $x$ is: $$Err(x)=E[(Y-\hat{f}(x))^2]$$ This error may then be decomposed into bias and variance components: $$Err(x)=\big(E[\hat{f}(x)]-f(x)\big)^2+E\big[(\hat{f}(x)-E[\hat{f}(x)])^2\big]+\sigma^2_\epsilon$$ $$Err(x)=Bias^2+Variance+Irreducible\ Error$$

I'm wondering: how are the last two equations derived from the first one?
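For context, here is the partial expansion I can get on my own (just a sketch, assuming the noise $\epsilon$ at the test point is independent of $\hat{f}(x)$ and that the expectation $E[\cdot]$ is taken over both the training set and $\epsilon$):

$$\begin{aligned}
Err(x) &= E\big[(f(x)+\epsilon-\hat{f}(x))^2\big]\\
&= E\big[(f(x)-\hat{f}(x))^2\big] + 2\,E\big[\epsilon\,(f(x)-\hat{f}(x))\big] + E[\epsilon^2]\\
&= E\big[(f(x)-\hat{f}(x))^2\big] + \sigma^2_\epsilon,
\end{aligned}$$

where the cross term vanishes because $E[\epsilon]=0$ and $\epsilon$ is independent of $\hat{f}(x)$. What I don't see is how the remaining term $E\big[(f(x)-\hat{f}(x))^2\big]$ splits into $Bias^2+Variance$.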

There is no accepted solution.
