Are conditional expectation values the same if expectation values are? Part II


Assume ${\bf x},{\bf z} \in \mathbb{R}^n$ are real-valued, bounded random variables with continuous joint probability density $p({\bf x},{\bf z})$, and that $f$ and $g$ are real-valued, bounded scalar functions. Furthermore, let $P_k({\bf z})$ denote the monomials in ${\bf z}$, indexed by $k$ (so that the expectation values $E[P_k]$ are the corresponding moments of $\bf z$).

Then, does the following implication hold?

For all $k$: $ E\left[f({\bf x})P_k({\bf z})\right] = E\left[g({\bf x})P_k({\bf z})\right]~~~ \implies ~~~E\left[f({\bf x})\,\middle|\,{\bf z}\right] = E\left[g({\bf x})\,\middle|\,{\bf z}\right]$

Note: All moments and expectation values exist because the random variables and functions are bounded.


I tried the following. (I'm not so sure about the exchange of the limits and about the uniform convergence.)

Because the premise holds for all polynomials of $\bf z$, I can represent, for each $\varepsilon > 0$, the Gaussian bump $\eta_\varepsilon({\bf z})$ appearing in the representation of the Dirac delta function in $\mathbb{R}^n$,

$\delta({\bf z})=\lim_{\varepsilon\to 0^+} \eta_\varepsilon({\bf z})$

as the power series of the exponential function, with coefficients $c_k$ for the monomials. This power series converges uniformly on bounded sets, in particular on the bounded range of ${\bf z}$, and thus the arguments of the expectation values converge uniformly to $f({\bf x})\eta_\varepsilon({\bf z})$ and $g({\bf x})\eta_\varepsilon({\bf z})$ (for arbitrary bounded $f$ and $g$). Then I can exchange the integral of the expectation with the infinite sum and obtain

$$ \sum_k c_k E\left[f({\bf x})P_k({\bf z})\right] = \sum_k c_k E\left[g({\bf x})P_k({\bf z})\right] \\ E\left[f({\bf x})\sum_k c_k P_k({\bf z})\right] = E\left[g({\bf x})\sum_k c_kP_k({\bf z})\right]\\ E\left[f({\bf x})\eta_\varepsilon({\bf z} -{\bf z}_0)\right] = E\left[g({\bf x})\eta_\varepsilon({\bf z} -{\bf z}_0)\right]\\ \lim_{\varepsilon\to 0^+}E\left[f({\bf x})\eta_\varepsilon({\bf z} -{\bf z}_0)\right] = \lim_{\varepsilon\to 0^+}E\left[g({\bf x})\eta_\varepsilon({\bf z} -{\bf z}_0)\right]\\ E\left[f({\bf x})\delta({\bf z} -{\bf z}_0)\right] = E\left[g({\bf x})\delta({\bf z} -{\bf z}_0)\right]\\ E\left[f({\bf x})|{\bf z}_0\right] = E\left[g({\bf x})|{\bf z}_0\right]$$
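For concreteness, the Gaussian bump and the expansion I use above can be written explicitly as

$$\eta_\varepsilon({\bf z}) = (2\pi\varepsilon^2)^{-n/2} \exp\!\left(-\frac{\|{\bf z}\|^2}{2\varepsilon^2}\right) = (2\pi\varepsilon^2)^{-n/2} \sum_{m=0}^{\infty} \frac{1}{m!} \left(-\frac{\|{\bf z}\|^2}{2\varepsilon^2}\right)^{m},$$

where expanding $\|{\bf z}\|^{2m} = (z_1^2+\dots+z_n^2)^m$ multinomially collects the series into the monomials $P_k({\bf z})$ with coefficients $c_k$; the exponential series converges uniformly on any bounded set.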

Is this correct? Of course, I would be happy if the statement could be proven without the assumption of a continuous probability density $p({\bf x},{\bf z})$, but I don't see a reason why it should be true.
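As a sanity check (a toy construction of my own, not part of the question), the implication can be verified exactly in a finite discrete setting: when $\bf z$ takes finitely many values, equality of all the mixed moments $E[f({\bf x})\,{\bf z}^k]$ pins down $E[f({\bf x})\,1\{{\bf z}=z_0\}]$ for each value $z_0$ by polynomial interpolation.

```python
import random

random.seed(0)

# Joint pmf p(x, z) on a finite grid: x in {0,1,2}, z in {0,1}.
xs, zs = (0, 1, 2), (0, 1)
w = {(x, z): random.random() for x in xs for z in zs}
tot = sum(w.values())
p = {s: v / tot for s, v in w.items()}

f = {0: 1.0, 1: 2.0, 2: 3.0}  # an arbitrary bounded f(x)

# Build g = f + d where d is orthogonal to both slice rows p(., z=0) and
# p(., z=1) (cross product of the rows), so E[(g - f)(x) 1{z = z0}] = 0
# for each z0 while g differs from f pointwise.
r0 = [p[(x, 0)] for x in xs]
r1 = [p[(x, 1)] for x in xs]
d = [r0[1] * r1[2] - r0[2] * r1[1],
     r0[2] * r1[0] - r0[0] * r1[2],
     r0[0] * r1[1] - r0[1] * r1[0]]
g = {x: f[x] + d[i] for i, x in enumerate(xs)}

def E(h):
    """Expectation of h(x, z) under the joint pmf p."""
    return sum(h(x, z) * p[(x, z)] for (x, z) in p)

# Premise: equal moment expectations E[f(x) z^k] = E[g(x) z^k] for all k.
for k in range(4):
    assert abs(E(lambda x, z: f[x] * z**k) - E(lambda x, z: g[x] * z**k)) < 1e-12

# Conclusion: equal conditional expectations E[f(x)|z] = E[g(x)|z].
for z0 in zs:
    pz = sum(p[(x, z0)] for x in xs)
    cf = sum(f[x] * p[(x, z0)] for x in xs) / pz
    cg = sum(g[x] * p[(x, z0)] for x in xs) / pz
    assert abs(cf - cg) < 1e-12
```

Here $g$ is deliberately chosen different from $f$ pointwise yet matching all mixed moments, and both conditional expectations coincide, as the claim predicts.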

Accepted answer:

Your desired conclusion does hold even without the assumption that $({\bf x},{\bf z})$ has a probability density, provided that the functions $f$ and $g$ are assumed to be Borel measurable.

Indeed, let $X:=f(\bf x)-g(\bf x)$ and $Z:=\bf z$, so that $X$ and $Z$ are bounded random variables with values in $\mathbb R$ and $\mathbb R^n$, respectively, such that $$EXP(Z)=0$$ for all polynomials $P$ on $\mathbb R^n$. Since $Z$ is bounded, its values lie in some compact $K\subset\mathbb R^n$, and by the Stone–Weierstrass theorem the polynomials are dense in $C(K)$ with respect to the uniform norm. Because $X$ is bounded, it follows that $EXh(Z)=0$ for every continuous $h$ on $K$; a standard monotone class (or dominated convergence) argument then extends this to indicators, so $EX\,1(Z\in B)=0$ for every Borel set $B\subseteq\mathbb R^n$, which means $E(X|Z)=0$, as desired.
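The density step invoked here is, in one dimension, the classical Weierstrass approximation theorem. As a quick numerical illustration (a toy example, not part of the answer), Bernstein polynomials approximate a continuous non-polynomial function uniformly on $[0,1]$, with the worst-case error shrinking as the degree grows:

```python
from math import comb

def bernstein(h, m, t):
    """Degree-m Bernstein polynomial of h, evaluated at t in [0, 1]."""
    return sum(h(i / m) * comb(m, i) * t**i * (1 - t)**(m - i)
               for i in range(m + 1))

def h(t):
    return abs(t - 0.4)  # continuous on [0, 1], but not a polynomial

grid = [j / 200 for j in range(201)]

def err(m):
    """Sup-norm error of the degree-m approximation over the grid."""
    return max(abs(bernstein(h, m, t) - h(t)) for t in grid)

# Uniform error decreases as the polynomial degree increases.
assert err(400) < err(50) < err(10)
```

This is only a one-dimensional sketch of the density fact; the answer uses its multivariate version on a compact $K\subset\mathbb R^n$.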