I cannot see, in terms of mathematical steps, where statement (3) in this setting comes from.
Framework:
- $Y$ is a r.v. uniformly distributed on an interval $[a, b] \subseteq \mathbb{R}$,
- $y$ is the realization of $Y$ and is unobservable,
- $X_1$ is a signal on $Y$ for the decision maker 1 (DM1),
- $x_1 := y + \eta$ is the observed signal by DM1, with $\eta$ uniformly distributed on $[-\varepsilon ,\varepsilon]$.
For the following statements, fix a realization of $y$ (which, as noted above, is unobservable).
Statements:
1. DM1's signal is uniformly distributed on $[y -\varepsilon, y +\varepsilon]$.
2. Given the signal $x_1$, the posterior of DM1 is uniformly distributed on $[x_1 - \varepsilon, x_1 + \varepsilon]$.
3. The conditional expectation of $y$ given $x_1$ is equal to $x_1$.
Thoughts:
Statement (1) makes sense: conditional on the realization $y$, we have $X_1 = y + \eta$ with $\eta$ uniform on $[-\varepsilon, \varepsilon]$, so $X_1$ is uniformly distributed on $[y - \varepsilon, y + \varepsilon]$.
For statement (2): once we observe the signal $x_1$, we can write $y = x_1 - \eta$, and since $\eta$ is uniform on $[-\varepsilon, \varepsilon]$, the posterior of $y$ is uniform on $[x_1 - \varepsilon, x_1 + \varepsilon]$ (at least when $x_1$ is far enough from the endpoints of $[a, b]$).
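To convince myself, I ran a quick Monte Carlo sketch (the parameter values $a=0$, $b=10$, $\varepsilon=0.5$ are example values I chose, not part of the problem): conditioning on $x_1$ near an interior target value, the sample of $y$ indeed looks uniform on $[x_1 - \varepsilon, x_1 + \varepsilon]$.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, eps = 0.0, 10.0, 0.5   # assumed example values, not from the problem
n = 2_000_000

y = rng.uniform(a, b, n)          # realizations of Y ~ U[a, b]
eta = rng.uniform(-eps, eps, n)   # noise eta ~ U[-eps, eps]
x1 = y + eta                      # DM1's observed signal

# Condition on x1 falling in a narrow window around an interior target
# (far from a and b, so boundary effects do not distort the posterior).
target, width = 5.0, 0.01
y_post = y[np.abs(x1 - target) < width]

# y_post should look uniform on [target - eps, target + eps]:
print(y_post.min(), y_post.max())  # close to (4.5, 5.5)
print(y_post.mean())               # close to 5.0
```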
Problem: I do not see where statement (3) comes from.
Could somebody clarify how we arrive at that result?
Of course, I would greatly appreciate any feedback concerning my thoughts on the other statements.
Thank you for your time.
To justify statement $(3)$ you only need that $\Bbb E[\eta \mid x_1] = \Bbb E[\eta] = 0$. This holds because $\eta \sim {\rm U}[-\varepsilon, \varepsilon]$ is symmetric about $0$, and for $x_1 \in [a + \varepsilon, b - \varepsilon]$ (i.e., away from the boundary of $[a,b]$) conditioning on $x_1$ leaves the distribution of $\eta$ unchanged. Hence, since $x_1 := y + \eta \implies y = x_1 - \eta$, you have $$\Bbb E[y\mid x_1]=\Bbb E[x_1-\eta\mid x_1]=\Bbb E[x_1\mid x_1]-\Bbb E[\eta\mid x_1]=x_1-0=x_1$$
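A small numerical check (with assumed example values $a=0$, $b=10$, $\varepsilon=0.5$, chosen only for illustration) shows that $\Bbb E[\eta \mid x_1] \approx 0$ in the interior, while near the boundary of $[a,b]$ conditioning on $x_1$ does shift the distribution of $\eta$, so there $\Bbb E[y \mid x_1] \neq x_1$:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, eps = 0.0, 10.0, 0.5   # assumed example values, not from the problem
n = 4_000_000

y = rng.uniform(a, b, n)          # Y ~ U[a, b]
eta = rng.uniform(-eps, eps, n)   # eta ~ U[-eps, eps]
x1 = y + eta                      # observed signal

def cond_mean_eta(t, width=0.01):
    """Monte Carlo estimate of E[eta | x1 ~= t]."""
    return eta[np.abs(x1 - t) < width].mean()

print(cond_mean_eta(5.0))    # interior: ~ 0, so E[y | x1] ~= x1
print(cond_mean_eta(0.25))   # near a: ~ -0.125, so E[y | x1] != x1
```

Near $x_1 = 0.25$ the posterior of $y$ is uniform on $[0, 0.75]$, with mean $0.375 \neq x_1$, which is exactly the boundary effect excluded above.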