I am currently studying the expected mean squared error and the derivations of this are as follows:
\begin{align} &E[(y_0 - \hat{f}(x_0))^2 \mid x_0]\\ &=E[(f(x_0) + \epsilon_0 - \hat{f}(x_0))^2 \mid x_0]\\ &=E[(f(x_0) - \hat{f}(x_0))^2 \mid x_0] + 2\,E[(f(x_0)-\hat{f}(x_0))\,\epsilon_0 \mid x_0] + E[\epsilon_0^2 \mid x_0] \end{align}
From this we can further extract the bias and variance terms. But I do not understand one step: what rule is used to rewrite the expression so that the cross term $2\,E[(f(x_0)-\hat{f}(x_0))\,\epsilon_0 \mid x_0]$ appears?
It is important to note that the expectation operator $\mathbb{E}[\cdot]$ is linear. That is, for any two random variables $X, Y$ and constant scalar $\lambda$:
$$\mathbb{E}[\lambda X + Y] = \lambda\,\mathbb{E}[X] + \mathbb{E}[Y].$$
So you only need to apply the binomial expansion $(a+b)^2 = a^2 + 2ab + b^2$ and then use linearity to split the expectation: $$\mathbb{E}[((f(x_0) - \hat{f}(x_0)) + \epsilon_0)^2] =\\ \mathbb{E}[(f(x_0) - \hat{f}(x_0))^2 + 2\,(f(x_0)-\hat{f}(x_0))\,\epsilon_0 + \epsilon_0^2] =\\ \mathbb{E}[(f(x_0) - \hat{f}(x_0))^2] + 2\,\mathbb{E}[(f(x_0)-\hat{f}(x_0))\,\epsilon_0] + \mathbb{E}[\epsilon_0^2].$$
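As a quick numerical sanity check (not part of the derivation itself), here is a small Monte Carlo sketch with made-up numbers for $f(x_0)$, $\hat{f}(x_0)$, and the noise scale. Because $\epsilon_0$ has mean zero and is independent of the prediction, the cross term averages out to roughly zero, and the expected squared error splits into the squared error of the prediction plus the noise variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical values at the query point x_0 (purely illustrative):
f_x0 = 2.0      # true value f(x_0)
fhat_x0 = 1.7   # prediction f_hat(x_0), treated as fixed given x_0
sigma = 0.5     # standard deviation of the noise eps_0, with E[eps_0] = 0

# Draw many realizations of y_0 = f(x_0) + eps_0
eps = rng.normal(0.0, sigma, size=1_000_000)
y0 = f_x0 + eps

# Left-hand side: E[(y_0 - f_hat(x_0))^2]
mse = np.mean((y0 - fhat_x0) ** 2)

# Right-hand side after the cross term drops: (f - f_hat)^2 + Var(eps)
decomposed = (f_x0 - fhat_x0) ** 2 + sigma ** 2

# The cross term 2*E[(f - f_hat) * eps] should be close to zero
cross = 2 * np.mean((f_x0 - fhat_x0) * eps)

print(mse, decomposed, cross)
```

With a million samples, `mse` and `decomposed` agree to a couple of decimal places and `cross` is near zero, which is exactly the step the expansion above formalizes.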