Distribution of mean of Normal distribution


Suppose $X\sim N(\mu,\sigma^2)$. I want to find the probability $P[\mu \ge \theta \mid x= \theta -c]$ for some $c>0$.

In other words: I observe a sample $x$ of a normal distribution and know that it is smaller than $\theta$. Now I want to know the probability that the mean of the distribution is larger than $\theta$.

Attempt 1: Suppose $\mu$ is a random variable; given the observation $x=\theta-c$, I want to find $P[\mu\ge \theta]$.

$$P[\mu \ge \theta \mid x= \theta -c]= \frac{p(x=\theta-c\mid\mu\ge\theta)\, P[\mu\ge\theta]}{p(x=\theta -c)} =\frac{\int_\theta^\infty p(x=\theta-c\mid\mu=t)\, p(\mu=t)\, dt}{p(x=\theta -c)}$$

(Since $x$ is continuous, the quantities written with $x=\theta-c$ are densities rather than probabilities.)

Now the only thing I know is that $P[x\le \theta -c\mid\mu=t]=\Phi\!\left(\frac{\theta-c-t}{\sigma}\right)$, the CDF of $N(t,\sigma^2)$ evaluated at $\theta-c$. I don't have prior information about $\mu$, and I am free to assume any prior distribution that makes the analysis easy.

Attempt 2: I don't have any information about $P[\mu\ge \theta]$ or $p(x=\theta-c)$, so is it acceptable to assume a flat (uniform) prior on $\mu$?

If so, I can do the following:

$$P[\mu \ge \theta \mid x= \theta -c]=\int_\theta^\infty p(x=\theta-c\mid\mu=t)\, dt =\int_\theta^\infty\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(t-\theta+c)^2}{2\sigma^2}}\, dt$$

The integrand is the density of $N(\theta-c,\sigma^2)$ as a function of $t$, so

$$P[\mu \ge \theta \mid x= \theta -c]=1-\Phi\!\left(\frac{\theta-(\theta-c)}{\sigma}\right)=\Phi\!\left(-\frac{c}{\sigma}\right),$$

which is just the CDF of $N(\theta,\sigma^2)$ evaluated at $\theta-c$.
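This flat-prior result can be sanity-checked numerically. Here is a minimal sketch (the function names and example values are mine, standard library only) that computes $\Phi(-c/\sigma)$ and compares it against Monte Carlo draws from the posterior $N(\theta-c,\sigma^2)$:

```python
import math
import random

def flat_prior_prob(c, sigma):
    """P[mu >= theta | x = theta - c] under a flat prior on mu:
    the posterior is N(theta - c, sigma^2), so this is Phi(-c/sigma)."""
    return 0.5 * (1.0 + math.erf(-c / (sigma * math.sqrt(2.0))))

def mc_check(theta, c, sigma, n=200_000, seed=0):
    """Draw mu from the posterior N(theta - c, sigma^2) and count
    how often mu >= theta."""
    rng = random.Random(seed)
    return sum(rng.gauss(theta - c, sigma) >= theta for _ in range(n)) / n

theta, c, sigma = 0.0, 1.0, 2.0
print(flat_prior_prob(c, sigma))   # Phi(-0.5) ~ 0.3085
print(mc_check(theta, c, sigma))   # close to the value above
```

The Monte Carlo estimate should agree with the closed form to within sampling error (about $10^{-3}$ at this sample size).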


Answer 1:

Well, it is just $1/2$: the observation is equally likely to fall above or below the mean of the distribution, so you need to specify more information. Also, as stated the problem is ill-defined, because the mean is not a random variable; I assumed that you want the probability of observing an $x$ that satisfies the inequality.

Answer 2:

You said you can assume any prior on $\mu$ that makes the problem easy. The easiest one is $\mu \sim N(\alpha, \beta^2)$ with $\alpha,\beta$ known, so that $X = \mu + N(0, \sigma^2)$ with the $N(0,\sigma^2)$ noise term independent of $\mu$. Then $X$ and $\mu$ are jointly Gaussian, and the conditional distribution of $\mu$ given $X$ is Gaussian with mean equal to the linear MMSE estimate of $\mu$ given $X$ and variance equal to the error covariance of that estimator.

The linear MMSE estimator of $X$ given $Y$ is $E[X] + cov(X,Y)\, cov(Y,Y)^{-1} (Y - E[Y])$, with error covariance $cov(X,X) - cov(X,Y)\, cov(Y,Y)^{-1}\, cov(Y,X)$.

In this case, you need $E[X] = E[E[X\mid\mu]] = E[\mu + E[N(0,\sigma^2)\mid\mu]] = E[\mu + 0] = \alpha$, $cov(\mu, \mu) = \beta^2$, $cov(X,X) = cov(\mu) + cov(N(0,\sigma^2)) = \beta^2 + \sigma^2$, and, using independence, $cov(X,\mu) = cov(\mu + N(0,\sigma^2), \mu) = cov(\mu,\mu) + cov(N(0,\sigma^2),\mu) = \beta^2 + 0 = \beta^2$; similarly, $cov(\mu,X) = \beta^2$.
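Substituting these moments into the LMMSE formulas gives the posterior in closed form:

$$\mu \mid X=x \;\sim\; N\!\left(\alpha + \frac{\beta^2}{\beta^2+\sigma^2}\,(x-\alpha),\;\; \frac{\beta^2\sigma^2}{\beta^2+\sigma^2}\right).$$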

This specifies the distribution of $\mu | X$ in this case, and from that, you can calculate all the desired probabilities.
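As a concrete illustration (the helper name and the numbers below are my own, not from the answer), the desired probability $P[\mu\ge\theta\mid X=\theta-c]$ under this conjugate Gaussian prior can be computed directly:

```python
import math

def posterior_prob(theta, c, sigma, alpha, beta):
    """P[mu >= theta | X = theta - c] for the prior mu ~ N(alpha, beta^2).
    Posterior: mu | X=x ~ N(alpha + k*(x - alpha), (1-k)*beta^2),
    with gain k = beta^2 / (beta^2 + sigma^2)."""
    x = theta - c
    k = beta**2 / (beta**2 + sigma**2)
    post_mean = alpha + k * (x - alpha)
    post_var = (1.0 - k) * beta**2  # = beta^2 * sigma^2 / (beta^2 + sigma^2)
    z = (theta - post_mean) / math.sqrt(post_var)
    return 0.5 * (1.0 + math.erf(-z / math.sqrt(2.0)))  # = 1 - Phi(z)

# As beta grows the prior flattens, and the answer approaches the
# flat-prior result Phi(-c/sigma) from Attempt 2 of the question.
print(posterior_prob(theta=0.0, c=1.0, sigma=2.0, alpha=0.0, beta=1e6))
```

Note how the prior matters: shifting $\alpha$ above $\theta$ pushes the posterior probability up, while a small $\beta$ concentrates the posterior near $\alpha$ regardless of the observation.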