Inequality for conditional expectation


I have three dependent random variables, $\theta$, $Y$ and $X$. Under which conditions on their joint distribution does the following implication hold:

For a known function $g(.)$, two different realizations $Y=y$ and $Y=y'$ and the same realization $X=x$, $$E[g(\theta)|Y=y]\neq E[g(\theta)|Y=y']\implies E[g(\theta)|X=x,Y=y]\neq E[g(\theta)|X=x,Y=y']?$$

It seems to me this will boil down to finding conditions for the consequent to be true, i.e., $E[g(\theta)|X=x,Y=y]\neq E[g(\theta)|X=x,Y=y']$ for $y\neq y'$.

My thoughts: \begin{equation} E[g(\theta)|X=x,Y=y]=\int g(\theta) f(\theta|X=x,Y=y)\, d\theta, \end{equation} so if $f(\theta|X=x,Y=y)$ first-order stochastically dominates $f(\theta|X=x,Y=y')$ or vice versa, then $E[g(\theta)|X=x,Y=y]\neq E[g(\theta)|X=x,Y=y']$ (assuming $g$ is strictly increasing or decreasing). This is of course only a sufficient and not a necessary condition, but it is still not clear to me when first-order stochastic dominance of $f(\theta|Y=y)$ over $f(\theta|Y=y')$ carries over to dominance of $f(\theta|X=x,Y=y)$ over $f(\theta|X=x,Y=y')$.
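For concreteness, here is a minimal numerical sketch of that sufficient condition, with toy distributions chosen purely for illustration (not part of the question itself): if $f(\theta|X=x,Y=y)$ is $N(1,1)$ and $f(\theta|X=x,Y=y')$ is $N(0,1)$, the former first-order stochastically dominates the latter, and for a strictly increasing $g$ the two conditional expectations indeed differ.

```python
import numpy as np
from scipy import stats, integrate

# Toy illustration of the FOSD sufficient condition (hypothetical distributions):
# f(theta | X=x, Y=y)  ~ N(1, 1)
# f(theta | X=x, Y=y') ~ N(0, 1)
# For a strictly increasing g, the conditional expectations must differ.
g = lambda t: t**3 + t  # any strictly increasing g

def cond_expectation(mu):
    """E[g(theta)] when theta ~ N(mu, 1), by numerical integration."""
    integrand = lambda t: g(t) * stats.norm.pdf(t, loc=mu, scale=1.0)
    val, _ = integrate.quad(integrand, -np.inf, np.inf)
    return val

print(cond_expectation(1.0))  # E[g(theta) | X=x, Y=y ]  ~= 5.0
print(cond_expectation(0.0))  # E[g(theta) | X=x, Y=y']  ~= 0.0
```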

Any ideas on that are very much appreciated. Thanks!


1 Answer


$$E[g(\theta)|Y=y]\neq E[g(\theta)|Y=y']\implies E[g(\theta)|X=x,Y=y]\neq E[g(\theta)|X=x,Y=y']?$$

The right-hand side (which would have to hold for every $x$) is more restrictive/stringent than the left-hand side, so the implication should not be expected to hold in general without additional assumptions.

Of course, a "trivial" reading of the implication suggests that the right-hand side will hold whenever "the actual realisation of $X$ contributes no additional information to the computation of the expectation", which suggests that $X$ and $Y$ should be independent. However, there are three random variables, $\Theta$, $X$ and $Y$, and ensuring only pairwise independence is sometimes not sufficient.
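To see that pairwise independence is indeed not enough, here is a hypothetical discrete counterexample (a toy construction of my own, not part of the original answer): $X$ is independent of $\Theta$ and independent of $Y$ separately, yet $E[\theta|Y=y]$ differs across $y$ while $E[\theta|X=0,Y=y]$ does not, so the implication fails.

```python
from itertools import product
from fractions import Fraction

# Hypothetical counterexample (toy construction): theta, X, Y all in {0, 1}.
#   Given X = 0: theta and Y are independent fair coins.
#   Given X = 1: Y is a fair coin and theta = Y.
#   P(X = 0) = P(X = 1) = 1/2.
# Then X is pairwise independent of theta and of Y, but the implication fails.
half = Fraction(1, 2)
pmf = {}
for theta, x, y in product((0, 1), repeat=3):
    if x == 0:
        pmf[(theta, x, y)] = half * half * half
    else:
        pmf[(theta, x, y)] = half * half * (1 if theta == y else 0)

def cond_exp_theta(x=None, y=None):
    """E[theta | X=x, Y=y]; pass None to leave a variable unconditioned."""
    num = den = Fraction(0)
    for (t, xv, yv), p in pmf.items():
        if (x is None or xv == x) and (y is None or yv == y):
            num += p * t
            den += p
    return num / den

print(cond_exp_theta(y=0), cond_exp_theta(y=1))            # 1/4 vs 3/4 -> differ
print(cond_exp_theta(x=0, y=0), cond_exp_theta(x=0, y=1))  # 1/2 vs 1/2 -> equal
```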

A somewhat "formal" condition on $X$ is that $X$ is independent of $\Theta$ and $Y$ jointly. If so,

$$ f(\theta, x, y) = f(\theta, y)\, f_{|\theta, y}(x) = f(\theta, y)\, f(x),$$ where $f$ denotes the p.d.f. of the indicated random variables and $f_{|\theta, y}(x)$ is the conditional p.d.f. of $X$ given $\Theta = \theta$, $Y = y$. In the last step we used the independence of $X$ from $\Theta$ and $Y$.

If the previous condition is satisfied, then $$E[g(\Theta)|Y=y]={\int g(\theta) f(\theta, y)\, d\theta \over \int f(\theta, y)\, d\theta},$$ while $$E[g(\Theta)|X=x, Y=y] = {\int g(\theta)f(\theta, x, y)\,d\theta \over \int f(\theta, x, y)\, d\theta} = {\int g(\theta)f(\theta, y)f(x)\,d\theta \over \int f(\theta, y)f(x)\, d\theta} = {\int g(\theta)f(\theta, y)\,d\theta \over \int f(\theta, y)\, d\theta},$$ where the second equality again uses the independence of $X$ from $\Theta$ and $Y$. Hence $E[g(\Theta)|X=x,Y=y]=E[g(\Theta)|Y=y]$ for every $x$, and the implication in the question holds (in fact as an equivalence).
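As a quick numerical sanity check of this factorization (with toy numbers chosen only for illustration), the following sketch builds a joint p.m.f. of the form $f(\theta,x,y)=f(\theta,y)f(x)$ and confirms that conditioning on $X=x$ leaves $E[g(\Theta)|Y=y]$ unchanged.

```python
import numpy as np

# Toy joint p.m.f. with X independent of (theta, Y): f(theta, x, y) = f(theta, y) f(x).
f_theta_y = np.array([[0.1, 0.3],   # rows: theta in {0, 1}
                      [0.4, 0.2]])  # cols: y in {0, 1}
f_x = np.array([0.7, 0.3])          # p.m.f. of X
joint = f_theta_y[:, None, :] * f_x[None, :, None]  # axes: (theta, x, y)

g = np.array([1.0, 3.0])            # g(0) = 1, g(1) = 3 (any g works)

for y in (0, 1):
    without_x = g @ joint[:, :, y].sum(axis=1) / joint[:, :, y].sum()  # E[g | Y=y]
    with_x0   = g @ joint[:, 0, y] / joint[:, 0, y].sum()              # E[g | X=0, Y=y]
    print(without_x, with_x0)       # equal in each row
```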