probability: best linear predictor $\hat{Y} = aX + b$


Let $X\sim\mathcal{U}(-1, 1)$ and $Y = X^2$.

Since the best linear predictor is defined as $$ \hat{Y} = E_Y[Y] + \frac{\text{cov}(X, Y)}{\text{var}(X)}(x - E_X[X]), $$ can I simply write it as $$ \hat{Y} = E_Y[X^2] + \frac{\text{cov}(X, X^2)}{\text{var}(X)}(x - E_X[X])? $$ Then, since $E[X] = 0$, $E[X^2] = 1$, and $E[X^3] = 0$ (so that $\text{var}(X) = E[X^2] - E[X]^2 = 1$), we have \begin{align} \hat{Y} &= 1 + (E[XY] - E[X]E[Y])(x - 0)\\ &= 1 + (E[X^3] - E[X]E[X^2])\,x\\ &= 1 + (0 - 0\cdot 1)\,x\\ &= 1. \end{align} This just seems like a strange answer, though.
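(Not part of the original question, but the three moments used above can be checked exactly with sympy; the density $1/2$ on $[-1,1]$ is that of $\mathcal{U}(-1,1)$.)

```python
# Exact moments of X ~ U(-1, 1), computed symbolically as a sanity check.
import sympy as sp

x = sp.symbols('x')
pdf = sp.Rational(1, 2)  # uniform density on [-1, 1]

EX  = sp.integrate(x    * pdf, (x, -1, 1))  # E[X]
EX2 = sp.integrate(x**2 * pdf, (x, -1, 1))  # E[X^2]
EX3 = sp.integrate(x**3 * pdf, (x, -1, 1))  # E[X^3]

print(EX, EX2, EX3)  # 0 1/3 0
```

Note that this gives $E[X^2] = 1/3$, not $1$.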

Best answer:

I don't see how you get $E[X^2]=1$. In fact, $E[X^2] = \int_{-1}^{1} \frac{x^2}{2}\,dx = \frac{1}{3}$.

Other than that, you are correct: since $\text{cov}(X,X^2)=0$, the best linear predictor is a constant, namely $\hat{Y} = E[Y] = 1/3$.
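A Monte Carlo sketch of this conclusion: fit the least-squares line to samples of $(X, X^2)$ and observe that the slope $\text{cov}(X,Y)/\text{var}(X)$ is essentially zero while the intercept is near $1/3$.

```python
# Monte Carlo check: the best linear predictor of Y = X^2 given
# X ~ U(-1, 1) should be the constant E[Y] = 1/3 (slope ~ 0).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1_000_000)
y = x**2

slope = np.cov(x, y)[0, 1] / np.var(x)   # cov(X, Y) / var(X)
intercept = y.mean() - slope * x.mean()  # E[Y] - slope * E[X]

print(slope, intercept)  # slope near 0, intercept near 1/3
```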