Conditional Expectation of a Normal R.V. on a Discrete Variable


Suppose a r.v. $\theta$ is distributed $N(\mu,\sigma^2)$ and there is a Bernoulli r.v. $s$ which takes value $1$ with probability $p(\theta)$ and value $0$ with probability $1-p(\theta)$. How do we calculate $\mathbb{E}[\theta \: | \: s]$?


This calls for an application of Bayes' rule, which gives the conditional pdf of $\theta$ given $s$:

$$f(\theta \: | \: s=k) = \frac{\mathbb{P}(s=k \: | \: \theta)f(\theta) }{\mathbb{P}(s=k)},$$ where $k\in \{0,1\}$ and $f(\theta)=\frac{1}{\sigma\sqrt{2\pi}}e^{-(\theta-\mu)^2/(2\sigma^2)}$ is the pdf of the $N(\mu,\sigma^2)$ distribution. Since $\int_\mathbb{R} f(\theta \: | \: s=k)\: d\theta =1$, the normalizing constant is $$\mathbb{P}(s=k)= \int_\mathbb{R} \mathbb{P}(s=k \: | \: \theta)f(\theta) \:d\theta = \mathbb{E}[\mathbb{P}(s=k \: | \: \theta)] \quad \text{for $k\in\{0,1\}$.}$$

Now we can find the conditional expectation as \begin{align*} \mathbb{E}[\theta \: | \: s=k] &= \int_\mathbb{R} \theta f(\theta \: | \: s=k) \: d\theta \\ &=\frac{1}{\mathbb{E}[\mathbb{P}(s=k \: | \: \theta)]}\int_\mathbb{R} \theta\mathbb{P}(s=k \: | \: \theta)f(\theta) \: d\theta \\ &=\frac{\mathbb{E}[\theta \mathbb{P}(s=k \: | \: \theta)]}{\mathbb{E}[ \mathbb{P}(s=k \: | \: \theta)]}.\end{align*}

Plugging in $p(\theta) = \mathbb{P}(s=1 \: | \: \theta)$, we get $$\mathbb{E}[\theta \: | \: s=1] = \frac{\mathbb{E}[\theta p(\theta)] }{\mathbb{E}[p(\theta)] } \quad \text{ and } \quad \mathbb{E}[\theta \: | \: s=0] = \frac{\mathbb{E}[\theta(1- p(\theta))] }{\mathbb{E}[1-p(\theta)] }.$$

As a sanity check, the law of total expectation $$\mathbb{E}[\theta]=\mathbb{E}[\theta \: | \: s=1]\mathbb{P}(s=1) + \mathbb{E}[\theta \: | \: s=0]\mathbb{P}(s=0)$$ is indeed satisfied by the above.
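The identity $\mathbb{E}[\theta \: | \: s=1] = \mathbb{E}[\theta p(\theta)]/\mathbb{E}[p(\theta)]$ is easy to check numerically by Monte Carlo. Here is a minimal sketch; the particular choices $\mu=1$, $\sigma=2$, and a logistic $p(\theta)$ are arbitrary illustrations, not part of the question:

```python
import numpy as np

# Monte Carlo check of E[theta | s=k] = E[theta * P(s=k|theta)] / E[P(s=k|theta)].
# Assumed example parameters: mu=1, sigma=2, p(theta) = logistic sigmoid.
rng = np.random.default_rng(0)
mu, sigma = 1.0, 2.0
p = lambda t: 1.0 / (1.0 + np.exp(-t))  # P(s=1 | theta), any function into (0,1) works

theta = rng.normal(mu, sigma, size=2_000_000)   # draws of theta ~ N(mu, sigma^2)
s = rng.random(theta.size) < p(theta)           # Bernoulli(p(theta)) draws given theta

# Direct conditional averages from the simulated pairs (theta, s)
e1_mc = theta[s].mean()
e0_mc = theta[~s].mean()

# The derived formulas: ratios of unconditional expectations
e1_formula = (theta * p(theta)).mean() / p(theta).mean()
e0_formula = (theta * (1 - p(theta))).mean() / (1 - p(theta)).mean()

print("E[theta|s=1]:", e1_mc, "vs formula", e1_formula)
print("E[theta|s=0]:", e0_mc, "vs formula", e0_formula)
```

The two estimates agree up to Monte Carlo error, and the formula values recombine exactly to $\mathbb{E}[\theta]$, as the law of total expectation requires.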