Let $(X,Y,Z)$ be a square integrable discrete-time Markov martingale, that is, $E[Z|X]=X$, $E[Y|X]=X$, $E[Z|X,Y]=E[Z|Y]=Y$. Let $U$ be a square integrable random variable independent of $(X,Y,Z)$. Is it true that
$$ E[ (Z- E[Z|X, Z+U])^2 ] \leq E[ (Z- E[Z|Y, Z+U])^2 ] + E[ (Y- E[Y|X, Y+U])^2 ]$$
This inequality says that the error made when estimating $Z$ from $X$ and a noisy version of $Z$ is at most the sum of the error made when estimating $Z$ from $Y$ and a noisy version of $Z$, and the error made when estimating $Y$ from $X$ and a noisy version of $Y$.
The inspiration for this inequality is that, if we remove the noisy terms $Z+U$ and $Y+U$ everywhere, the inequality is true (in fact, by orthogonality of the martingale increments, it holds with equality): $$ E[ (Z- E[Z|X])^2 ] = E[ (Z- X)^2 ] \leq E[ (Z- Y)^2 ] + E[ (Y- X)^2 ]$$
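This noiseless identity is easy to sanity-check numerically. Below is a quick Monte Carlo sketch using a Gaussian random-walk martingale of my own choosing (the question does not fix a particular distribution):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# A concrete martingale (an illustrative choice, not from the question):
# a Gaussian random walk, so E[Y|X] = X and E[Z|X,Y] = E[Z|Y] = Y.
X = rng.standard_normal(n)
Y = X + rng.standard_normal(n)
Z = Y + rng.standard_normal(n)

lhs = np.mean((Z - X) ** 2)                          # E[(Z - E[Z|X])^2], since E[Z|X] = X
rhs = np.mean((Z - Y) ** 2) + np.mean((Y - X) ** 2)  # sum of one-step errors
print(lhs, rhs)  # agree up to Monte Carlo error: the increments are orthogonal
```

Here `lhs` and `rhs` both concentrate around $2$, reflecting that the cross term $2E[(Z-Y)(Y-X)]$ vanishes for a martingale.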
Now, adding the noisy terms $Z+U$ and $Y+U$ inside the conditioning makes all the estimates (the conditional expectations) better, i.e., the errors smaller. Since $U$ is common to all terms and is independent of the martingale, we are making them better "in the same way" (hand waving). This is the intuition behind
$$ E[ (Z- E[Z|X, Z+U])^2 ] \leq E[ (Z- E[Z|Y, Z+U])^2 ] + E[ (Y- E[Y|X, Y+U])^2 ]$$
Any thoughts/proof?
I feel there are some problems with this conjecture.
As an example, consider the following scenario. Assume $X$ is always $1$, $Y$ is uniformly distributed on $\{0, 2\}$, and $Z \sim N(Y, 1)$. Furthermore, $U$ is uniformly distributed on $\{0, 1\}$, independent of everything else.
Then the second term on the RHS of your conjecture is zero, since $Y + U$ uniquely determines $Y$. So we are left with $$E[(Z - E[Z | Z + U])^2] \leq E[(Z - E[Z | Y, Z + U])^2].$$ Or $$E[Z^2] - E[E[Z | Z + U]^2] \leq E[Z^2] - E[E[Z | Y, Z + U]^2].$$ By the tower property, $E[Z|Z+U] = E\big[E[Z|Y, Z+U] \,\big|\, Z+U\big]$, so Pythagoras gives $E[E[Z|Y,Z+U]^2] = E[E[Z|Z+U]^2] + E[(E[Z|Y,Z+U] - E[Z|Z+U])^2]$, and the inequality becomes $$E[(E[Z | Z + U] - E[Z | Y, Z + U])^2] \leq 0,$$ which is false since $E[Z | Z + U] \neq E[Z | Y, Z + U]$.
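The counterexample can also be checked numerically. In this setup both conditional expectations are available in closed form as posterior means over the finitely many $(Y, U)$ configurations (with $X \equiv 1$, conditioning on $(X, Z+U)$ is the same as conditioning on $Z+U$ alone). A Monte Carlo sketch:

```python
import numpy as np
from math import sqrt, pi

rng = np.random.default_rng(0)
n = 400_000

Y = rng.choice([0.0, 2.0], size=n)       # Y uniform on {0, 2}
Z = Y + rng.standard_normal(n)           # Z ~ N(Y, 1)
U = rng.choice([0.0, 1.0], size=n)       # U uniform on {0, 1}, independent
W = Z + U                                # the noisy observation of Z

def phi(t):
    """Standard normal density."""
    return np.exp(-t ** 2 / 2) / sqrt(2 * pi)

# E[Z | Y, W]: given Y, Z = W - U with U in {0, 1}, and
# P(U = u | Y, W) is proportional to phi(W - u - Y).
w0, w1 = phi(W - Y), phi(W - 1 - Y)
EZ_YW = (w0 * W + w1 * (W - 1)) / (w0 + w1)

# E[Z | W]: mixture over the four equally likely (y, u) pairs.
num, den = np.zeros(n), np.zeros(n)
for y in (0.0, 2.0):
    for u in (0.0, 1.0):
        p = phi(W - u - y)
        num += p * (W - u)
        den += p
EZ_W = num / den

lhs = np.mean((Z - EZ_W) ** 2)   # E[(Z - E[Z | X, Z+U])^2]
rhs = np.mean((Z - EZ_YW) ** 2)  # E[(Z - E[Z | Y, Z+U])^2]; the other RHS term is 0
print(lhs, rhs)
```

Running this shows `lhs` strictly exceeding `rhs`, confirming that the conjectured inequality fails for this example.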