Say we have a set of target values $\{y^{(i)}\}_{i=1}^N$ and predictions for those targets from two different OLS models, $\{y^{(i)}_A\}_{i=1}^N$ and $\{y^{(i)}_B\}_{i=1}^N$.
I am trying to prove that the difference in expected values of the squared error, i.e.,
$$\mathbb{E}[(y_A - y)^2] - \mathbb{E}[(y_B - y)^2]$$
is equivalent to the expected value of the difference in the predictions squared, i.e.,
$$\mathbb{E}[(y_A - y_B)^2]$$
I have tried expanding out the squared differences, but I can't really see where to start with this; any help would be appreciated.
Potential solution (not verified):
If $y_A$ and $y_B$ are predictions from unbiased OLS estimators, so that $\mathbb{E}[y_A - y] = \mathbb{E}[y_B - y] = 0$, then we have: \begin{align} \mathbb{E} \left[ (y_{A} - y)^2 \right] &= \sigma^2_{y_{A} - y} + \mathbb{E} \left[ y_{A} - y \right]^2 \\ &= \sigma^2_{y_{A} - y} + 0 \\ &= \sigma^2_{y_{A}} = \mathbb{E} \left[ y_{A}^2 \right] - \mathbb{E} \left[ y_{A} \right]^2, \end{align} where the last line treats the target $y$ as fixed, so that $\sigma^2_{y_A - y} = \sigma^2_{y_A}$; the same decomposition holds for $y_B$.
Substituting this above:
\begin{align} \mathbb{E}[(y_A - y)^2] - \mathbb{E}[(y_B - y)^2] &= \mathbb{E} \left[ y_{A}^2 \right] - \mathbb{E} \left[ y_{A} \right]^2 - \mathbb{E} \left[ y_{B}^2 \right] + \mathbb{E} \left[ y_{B} \right]^2 \\ &= \mathbb{E} \left[ y_{A}^2 \right] - \mathbb{E} \left[ y_{B}^2 \right] \\ &= \mathbb{E} \left[ y_{A}^2 \right] + \mathbb{E} \left[ y_{B}^2 \right] - 2\mathbb{E} \left[ y_{B}^2 \right] \\ &= \mathbb{E} \left[ y_{A}^2 \right] + \mathbb{E} \left[ y_{B}^2 \right] - 2 \left( \sigma^2_{y_{B}} + \mathbb{E} \left[ y_{B} \right]^2 \right) \\ &= \mathbb{E} \left[ y_{A}^2 \right] + \mathbb{E} \left[ y_{B}^2 \right] - \\ & \quad \quad \quad 2 \left( \rho_{\left( y_{A}, y_{B} \right)} + \mathbb{E} \left[ y_{A} \right] \mathbb{E} \left[ y_{B} \right] \right) \\ &= \mathbb{E} \left[ y_{A}^2 \right] + \mathbb{E} \left[ y_{B}^2 \right] - 2 \mathbb{E} \left[ y_{A} \cdot y_{B} \right] \\ &= \mathbb{E} \left[ \left( y_{A} - y_{B} \right)^2 \right], \end{align}
where $\rho(\cdot, \cdot)$ denotes covariance. The second line uses $\mathbb{E}[y_A] = \mathbb{E}[y_B]$, which follows from both estimators being unbiased for the same target. Note that the fifth line replaces $\sigma^2_{y_B}$ with $\rho(y_A, y_B)$, which additionally requires $\rho(y_A, y_B) = \sigma^2_{y_B}$; this holds, for example, when model $B$ is nested in model $A$, since the fitted values of the smaller model are an orthogonal projection of those of the larger one.
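As a sanity check, here is a minimal NumPy sketch, assuming the two models are nested (the smaller model's regressors are a subset of the larger model's). In-sample, the identity then holds exactly, not just in expectation, by the Pythagorean theorem for orthogonal projections: the residual of the larger model is orthogonal to the difference between the two fitted vectors. The setup and the helper `ols_fit` are illustrative, not part of the original question.

```python
import numpy as np

# Check: mean((yA - y)^2) - mean((yB - y)^2) == mean((yA - yB)^2)
# when model A (intercept only) is nested in model B (intercept + slope).

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

# Design matrices: A's column space is contained in B's.
XA = np.ones((n, 1))
XB = np.column_stack([np.ones(n), x])

def ols_fit(X, y):
    """Return OLS fitted values X @ beta_hat via least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

yA = ols_fit(XA, y)  # predictions of the smaller (worse) model
yB = ols_fit(XB, y)  # predictions of the larger model

lhs = np.mean((yA - y) ** 2) - np.mean((yB - y) ** 2)
rhs = np.mean((yA - yB) ** 2)
print(lhs, rhs)  # the two quantities agree to numerical precision
assert np.isclose(lhs, rhs)
```

Note the ordering: the difference of squared errors is written with the nested (higher-error) model first, so both sides are non-negative; with the models swapped, the left-hand side flips sign while the right-hand side stays positive.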