Understanding the proof that the conditional distribution of a Gaussian random variable is Gaussian with mean linear in the conditioning Gaussian


Let $X: \Omega \to \mathbb{R}$ and $Y:\Omega \to \mathbb{R}^d$ be random variables such that $(X,Y)$ is a Gaussian random vector.

Then there exist $a, b_1, \dots, b_d \in \mathbb{R}$ such that $$P(X\in dx\mid Y=y)=N\Big(a+\sum_{k=1}^d b_k y_k,\ \sigma^2\Big),$$ where, by a previous theorem, $X = a + \sum_{k=1}^d b_k Y_k + Z$ with $Z \sim N(0,\sigma^2)$ Gaussian and independent of $Y$.
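For orientation (this is not part of the quoted statement): in the scalar case $d=1$ with $\operatorname{Var}(Y)>0$, the constants are the usual regression coefficients, $$ b_1 = \frac{\operatorname{Cov}(X,Y)}{\operatorname{Var}(Y)}, \qquad a = \mathbb{E}[X] - b_1\,\mathbb{E}[Y], \qquad \sigma^2 = \operatorname{Var}(X) - \frac{\operatorname{Cov}(X,Y)^2}{\operatorname{Var}(Y)}. $$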

The proof below is from Rene Schilling. I have two questions regarding the proof.

First, how do we get $E\big(g(Y)\,e^{i\xi (a+\sum b_k Y_k + Z)}\big)=\int g(y)\, E\big(e^{i\xi (a+\sum b_k y_k + Z)}\big)\,P(Y\in dy)$ from the fact that $Z$ and $Y$ are independent?

Second, I cannot understand the final identity. Why does the identity $E\big(g(Y) \int e^{i\xi x}\, P(X\in dx\mid Y)\big) = \int g(y)\, E\big(e^{i\xi (a+\sum b_k y_k + Z)}\big)\, P(Y\in dy)$ for every bounded measurable $g$ imply that $\int e^{i\xi x}\, P(X\in dx\mid Y=y) = E\big(e^{i\xi (a+\sum b_k y_k + Z)}\big)$?

I would greatly appreciate a rigorous explanation of these details.

[image: the proof, as given in Schilling's book]

1 Answer

The identity in your first question follows from the fact that, for independent random variables $X$ and $Y$ and bounded measurable functions $f$ and $g$, $$ \mathbb{E}[f(X)g(Y)] = \mathbb{E}[f(X)]\cdot\mathbb{E}[g(Y)], $$ which can be seen by conditioning on one of the variables (tower property), pulling out the measurable factor, and then using independence to drop the conditioning.
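Spelled out (a standard computation, conditioning on $Y$ for concreteness): \begin{align} \mathbb{E}[f(X)g(Y)] &= \mathbb{E}\big[\mathbb{E}[f(X)g(Y)\mid Y]\big] && \text{(tower property)} \\ &= \mathbb{E}\big[g(Y)\,\mathbb{E}[f(X)\mid Y]\big] && \text{($g(Y)$ is $\sigma(Y)$-measurable)} \\ &= \mathbb{E}\big[g(Y)\,\mathbb{E}[f(X)]\big] && \text{($X$ and $Y$ independent)} \\ &= \mathbb{E}[f(X)]\,\mathbb{E}[g(Y)]. \end{align}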

In your context this means \begin{align} \mathbb{E}[g(Y)e^{i\xi(a+\sum b_kY_k)}e^{i\xi Z}] &= \mathbb{E}[\mathbb{E}[g(Y)e^{i\xi(a+\sum b_kY_k)}e^{i\xi Z} \mid Z]] \\ &= \mathbb{E}[e^{i\xi Z}\mathbb{E}[g(Y)e^{i\xi(a+\sum b_kY_k)}\mid Z]] \\ &= \mathbb{E}[e^{i\xi Z}]\mathbb{E}[g(Y)e^{i\xi(a+\sum b_kY_k)}] \\ &= \mathbb{E}[e^{i\xi Z}] \int g(y)e^{i\xi(a+\sum b_ky_k)}\mathbb{P}(Y\in\mathrm{d}y) \\ &= \int g(y)\mathbb{E}[e^{i\xi(a+\sum b_ky_k)}e^{i\xi Z}]\mathbb{P}(Y\in\mathrm{d}y) \\ &= \int g(y)\mathbb{E}[e^{i\xi(a+\sum b_ky_k+Z)}]\mathbb{P}(Y\in\mathrm{d}y), \end{align} where the last two equalities use that, for each fixed $y$, the factor $e^{i\xi(a+\sum b_ky_k)}$ is deterministic and can be moved in and out of the expectation.
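As a side remark (not part of the quoted proof, but it explains why this identifies the conditional law): since $Z\sim N(0,\sigma^2)$ has characteristic function $\mathbb{E}[e^{i\xi Z}]=e^{-\sigma^2\xi^2/2}$, the inner expectation can be computed explicitly, $$ \mathbb{E}\big[e^{i\xi(a+\sum b_ky_k+Z)}\big] = e^{i\xi(a+\sum b_ky_k)}\,\mathbb{E}\big[e^{i\xi Z}\big] = e^{i\xi(a+\sum b_ky_k)}\,e^{-\sigma^2\xi^2/2}, $$ which is precisely the characteristic function of $N(a+\sum b_ky_k,\sigma^2)$.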

The final identity (from your second question) follows from the fact that, for $f\in L^1(\mu)$, $$ \int f(x)g(x)\,\mu(\mathrm{d}x) = 0 \quad\text{for every bounded measurable } g \implies f = 0 \quad \mu\text{-a.e.} $$ This is a measure-theoretic analogue of the fundamental lemma of the calculus of variations. Intuitively, if $f$ were non-zero on a set of positive measure, you could choose $g$ to match the sign of $f$ on that set (and vanish elsewhere), and then the integral on the left-hand side could not be zero.
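To spell out how this applies here (a sketch, using the conjugate trick because the integrand is complex-valued): fix $\xi$ and set $$ h(y) := \int e^{i\xi x}\,P(X\in dx\mid Y=y) - \mathbb{E}\big[e^{i\xi(a+\sum b_ky_k+Z)}\big], $$ a bounded measurable function of $y$. The identity says that $\int g(y)h(y)\,P(Y\in dy)=0$ for every bounded measurable $g$; taking $g=\overline{h}$ gives $\int |h(y)|^2\,P(Y\in dy)=0$, hence $h=0$ for $P_Y$-almost every $y$. This holds for each fixed $\xi$ separately; running over a countable dense set of $\xi$'s and using that both sides are continuous in $\xi$, the equality holds for all $\xi$ outside a single $P_Y$-null set, and the uniqueness theorem for characteristic functions then gives $P(X\in dx\mid Y=y)=N(a+\sum b_ky_k,\sigma^2)$ for $P_Y$-almost every $y$.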