Conditional probability and a normal distribution


Apologies as I have never studied statistics at a high enough level to be sure that I am using some vocabulary correctly.

Let's suppose I draw some $\mu_{1} \dots \mu_{n}$ from a normal distribution $N(0,1)$. I do not see the values of $\mu_{k}$. They are hidden and fixed.

And I consider normal distributions $N(\mu_{k} , \sigma)$ where $\sigma$ is a universal constant.

The observational data I have is indirect: I draw observations $X_{i,k} \sim N(\mu_{k} , \sigma)$, but I never see the $X_{i,k}$ themselves; I am only given comparisons of the form $X_{i,k} > X_{i,k'}$.

So it makes sense to talk about $\mu_{k}$ as having a conditional (posterior) probability distribution, because the observational data constrains the values I can expect $\mu_{k}$ to take on. I will write $E[\mu_{k}]$ for the expected value of $\mu_{k}$ with respect to this distribution.

How do I go about computing $E[\mu_{k}]$?

For instance, I tried working out the case $n = 2$ and one observation $X_{1,1} > X_{1,2}$ as follows:

The probability that $X_{i,1} > X_{i,2}$, given $\mu_1$ and $\mu_2$, is $\frac{1}{2} + \frac{1}{2} \operatorname{erf}\left( \frac{\mu_1 - \mu_2}{2\sigma} \right)$ (which follows since $X_{i,1} - X_{i,2} \sim N(\mu_1 - \mu_2, 2\sigma^2)$).
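As a sanity check on this formula, here is a quick Monte Carlo estimate (the values of $\mu_1$, $\mu_2$, $\sigma$ are arbitrary illustrative choices, not anything from the setup above):

```python
import math
import numpy as np

# Check P(X1 > X2) = 1/2 + 1/2 * erf((mu1 - mu2) / (2*sigma))
# for independent X1 ~ N(mu1, sigma^2), X2 ~ N(mu2, sigma^2).
rng = np.random.default_rng(0)
mu1, mu2, sigma = 0.7, -0.3, 1.5   # arbitrary illustrative values
n = 1_000_000

x1 = rng.normal(mu1, sigma, n)
x2 = rng.normal(mu2, sigma, n)

empirical = np.mean(x1 > x2)
theoretical = 0.5 + 0.5 * math.erf((mu1 - mu2) / (2 * sigma))
print(empirical, theoretical)  # the two should agree to ~3 decimal places
```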

Writing $d = \mu_1 - \mu_2$, the prior on $d$ is $N(0,2)$, so the posterior expectation of $d$ given the observation is

$$E[d \mid X_{1,1} > X_{1,2}] = \frac{1}{P(X_{1,1} > X_{1,2})}\int_{\mathbb R} d \left( \frac{1}{2} + \frac{1}{2} \operatorname{erf}\left( \frac{d}{2\sigma} \right)\right) \frac{e^{-d^2/4}}{2\sqrt{\pi}} \, \mathrm{d}d$$

where $P(X_{1,1} > X_{1,2}) = \frac{1}{2}$ by symmetry. After some integration by parts this reduces to something like $2\left(\pi(1+\sigma^{2})\right)^{-1/2}$ (although I may have slipped in the arithmetic).
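A Monte Carlo check of this posterior expectation, by direct simulation of the model ($\mu_k \sim N(0,1)$, $X_k \sim N(\mu_k, \sigma^2)$, then conditioning on the comparison), against the candidate closed form $2/\sqrt{\pi(1+\sigma^2)}$:

```python
import math
import numpy as np

# Estimate E[mu1 - mu2 | X1 > X2] by rejection: simulate the whole model,
# keep only the runs where the observed comparison X1 > X2 holds.
rng = np.random.default_rng(1)
sigma = 1.0          # illustrative choice of the known constant sigma
n = 2_000_000

mu1 = rng.normal(0.0, 1.0, n)
mu2 = rng.normal(0.0, 1.0, n)
x1 = rng.normal(mu1, sigma)      # one observation per hidden mean
x2 = rng.normal(mu2, sigma)

keep = x1 > x2                   # condition on the comparison datum
empirical = np.mean(mu1[keep] - mu2[keep])
theoretical = 2 / math.sqrt(math.pi * (1 + sigma**2))
print(empirical, theoretical)
```

(Roughly half the samples survive the conditioning, which is plenty for three-digit agreement. By symmetry $E[\mu_1 + \mu_2]$ is unchanged by the observation, so $E[\mu_1]$ itself is half the value above.)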

I figure this is a general enough problem that I should be able to look up a solution. Is there a good resource (e.g. a textbook) that I can look to for this?