Sum of two shifted normal random variables


Suppose a random variable Z has the following density: $$ g(x) = \frac{1}{2} \phi(x+1) + \frac{1}{2} \phi(x-1) $$ where $\phi(x)$ is the pdf of a standard normal random variable.

Prove that $Z$ has variance $2$.

I can see that $Z$ can be thought of as the sum of two normal random variables $N(-1,1)$ and $N(1,1)$, and so the mean of $Z$ will be $0$. But how do I calculate its variance? I guess that we need to calculate the covariance of the two components, but I am not sure how.



BEST ANSWER

I will let you check that $EZ=0$ and show you how to compute $EZ^{2}$.

$$EZ^{2}=\int x^{2}g(x)dx=\frac 1 2\int x^{2}\phi (x+1)dx+\frac 1 2\int x^{2}\phi (x-1)dx$$ $$=\frac 1 2\int (y-1)^{2}\phi (y)dy+\frac 1 2\int (y+1)^{2}\phi (y)dy.$$ You can easily compute this by expanding the squares.

Recall that $\int y\phi (y)dy=0, \int \phi (y)dy=1$ and $\int y^{2}\phi (y)dy=1$.
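The moment integrals above can also be checked numerically. Below is a quick sketch that evaluates $\int x^2 g(x)\,dx$ by a Riemann sum on a fine grid (the grid bounds and resolution are arbitrary choices, wide enough that the Gaussian tails are negligible):

```python
import numpy as np

def phi(x):
    """Standard normal pdf."""
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Mixture density g(x) = (1/2) phi(x+1) + (1/2) phi(x-1)
x = np.linspace(-12, 12, 400001)
dx = x[1] - x[0]
g = 0.5 * phi(x + 1) + 0.5 * phi(x - 1)

EZ = np.sum(x * g) * dx        # E[Z], should be ~0
EZ2 = np.sum(x**2 * g) * dx    # E[Z^2], should be ~2
var = EZ2 - EZ**2
print(EZ, EZ2, var)
```

Expanding the squares by hand gives the same answer: each integral contributes $1 \mp 0 + 1 = 2$, so $EZ^{2} = \frac12 \cdot 2 + \frac12 \cdot 2 = 2$.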

ANSWER

What you write is not quite true: $Z$ is not the sum of two normal random variables (that would lead to a convolution of the pdfs, not a sum of pdfs); it is a mixture.

Specifically, if $B\sim\operatorname{Bernoulli}(1/2)$, $X\sim\mathcal{N}(-1,1)$, $Y\sim\mathcal{N}(1,1)$ are three independent random variables, then $Z$ has the distribution of $$ B\cdot X + (1-B) \cdot Y, $$ from which $$\mathbb{E}[Z] = \mathbb{E}[B\cdot X] + \mathbb{E}[(1-B) \cdot Y] = \mathbb{E}[B]\cdot \mathbb{E}[X] + \mathbb{E}[1-B] \cdot \mathbb{E}[Y] = \frac{1}{2}\cdot (-1)+\frac{1}{2}\cdot 1 = 0$$ and $$\begin{align*}\mathbb{E}[Z^2] &= \mathbb{E}[B^2\cdot X^2+(1-B)^2 \cdot Y^2 + 2B(1-B)XY] = \mathbb{E}[B\cdot X^2+(1-B) \cdot Y^2]\\ &= \frac{1}{2}\mathbb{E}[X^2]+\frac{1}{2}\mathbb{E}[Y^2] = \frac{1}{2}\cdot 2+\frac{1}{2}\cdot 2 = 2, \end{align*}$$ using that $B^2 = B$, $(1-B)^2=1-B$, and $B(1-B)=0$ (since $B\in\{0,1\}$), and that $\mathbb{E}[X^2] = \operatorname{Var}[X] + \mathbb{E}[X]^2 = 1 + 1 = 2$ (and likewise for $Y$). Hence $\operatorname{Var}[Z] = \mathbb{E}[Z^2] - \mathbb{E}[Z]^2 = 2 - 0 = 2$.
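As a sanity check (not part of the proof), the representation $B\cdot X + (1-B)\cdot Y$ can be simulated directly; the sample mean and variance should come out near $0$ and $2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

B = rng.integers(0, 2, size=n)     # Bernoulli(1/2)
X = rng.normal(-1.0, 1.0, size=n)  # N(-1, 1)
Y = rng.normal(1.0, 1.0, size=n)   # N(1, 1)
Z = B * X + (1 - B) * Y            # mixture representation of Z

print(Z.mean(), Z.var())
```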

ANSWER

The variable $Z$ has what is called a mixture density: the density of $Z$ is a weighted sum of other densities. But this is not the same as saying that $Z$ is the sum of other random variables. In particular, your claim that $Z$ is the sum of two normal random variables is incorrect. We can see this because the density of $Z$ is $$f_Z(z) = \frac{1}{2 \sqrt{2\pi}} \left( e^{-(z-1)^2/2} + e^{-(z+1)^2/2} \right).$$ But if $X_1$ and $X_2$ are independent and normally distributed with means $-1$ and $1$ respectively, each with variance $1$, then $$Y = X_1 + X_2 \sim \operatorname{Normal}(\mu = 0, \sigma^2 = 2);$$ that is, their sum is normally distributed with mean $0$ and variance $2$, hence has density $$f_Y(y) = \frac{1}{2\sqrt{\pi}} e^{-y^2/4} \ne f_Z(y).$$ To calculate the variance of $Z$, we note that for an equally weighted mixture of two densities, say $$f_Z(z) = \frac{1}{2}\left(f_{X_1}(z) + f_{X_2}(z)\right),$$ we have for each positive integer $k$, $$\operatorname{E}[Z^k] = \frac{1}{2} \int_{z \in \Omega} z^k f_{X_1}(z) + z^k f_{X_2}(z) \, dz = \frac{1}{2} \left(\operatorname{E}[X_1^k] + \operatorname{E}[X_2^k]\right);$$ that is, the $k^{\rm th}$ moment of $Z$ is the average of the corresponding moments of $X_1$ and $X_2$. So the variance is $$\begin{align} \operatorname{Var}[Z] &= \operatorname{E}[Z^2] - \operatorname{E}[Z]^2 \\ &= \frac{1}{2}(\operatorname{E}[X_1^2] + \operatorname{E}[X_2^2]) - \left(\frac{\operatorname{E}[X_1] + \operatorname{E}[X_2]}{2}\right)^2 \\ &= \frac{1}{2}(\operatorname{Var}[X_1] + \operatorname{Var}[X_2] + \operatorname{E}[X_1]^2 + \operatorname{E}[X_2]^2) - \frac{(\operatorname{E}[X_1] + \operatorname{E}[X_2])^2}{4} \\ &= \frac{\operatorname{Var}[X_1] + \operatorname{Var}[X_2]}{2} + \frac{(\operatorname{E}[X_1] - \operatorname{E}[X_2])^2}{4}. \end{align}$$ If $X_1$ and $X_2$ have means $\mu_1, \mu_2$ and variances $\sigma_1^2, \sigma_2^2$, respectively, then this can be written as $$\operatorname{Var}[Z] = \frac{\sigma_1^2 + \sigma_2^2}{2} + \frac{(\mu_1 - \mu_2)^2}{4}.$$ In the present problem, $\mu_1 = -1$, $\mu_2 = 1$, $\sigma_1^2 = \sigma_2^2 = 1$, giving $\operatorname{Var}[Z] = 1 + 1 = 2$ as claimed. Note the above does not rely on any assumptions about the normality of $X_1$ or $X_2$; it applies whenever the means and variances exist.
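The general formula can be spot-checked by simulation for any choice of means and variances; the parameters below are arbitrary illustrative values, not taken from the problem:

```python
import numpy as np

rng = np.random.default_rng(1)
mu1, mu2, s1, s2 = -3.0, 5.0, 2.0, 0.5  # arbitrary example parameters
n = 1_000_000

# Equal-weight mixture: pick component 1 or 2 with probability 1/2 each
B = rng.integers(0, 2, size=n)
Z = np.where(B == 1,
             rng.normal(mu1, s1, size=n),
             rng.normal(mu2, s2, size=n))

# Var[Z] = (sigma1^2 + sigma2^2)/2 + (mu1 - mu2)^2/4
formula = (s1**2 + s2**2) / 2 + (mu1 - mu2) ** 2 / 4
print(Z.var(), formula)
```

The two printed values should agree to within Monte Carlo error.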