I'm having some trouble understanding the following equality from my course book.
Some background:
Let $X_1, \dots, X_n$ be $n$ independent Bernoulli random variables with unknown parameter $\theta$. Then $X_1$ is an unbiased estimator of $\theta$, and $T(X) = \sum_{i=1}^n X_i$ is a sufficient statistic.
Now, to derive an estimator with a smaller variance, one can apply the Rao-Blackwell theorem and compute the conditional expectation of the unbiased estimator given the sufficient statistic.
The following is written in my book which I do not understand.
$E_\theta (X_1|\sum_{i=1}^n X_i =t) = P_\theta (X_1 =1|\sum_{i=1}^n X_i = t)$
I tried to compute it using some properties of conditional expectation, but I feel like the course book is skipping a lot of steps, or that I might be missing something. Also, why is $X_1 = 1$ inside the probability?
I would appreciate it if anyone could explain what is going on here.
Since $X_1$ is Bernoulli, it can only take the values $0$ and $1$. Explicitly write out the expectation. For the probability, use a counting argument: given that $t$ out of $n$ coins are heads, there are $\binom{n}{t}$ equally likely ways this can happen. Now suppose the first coin is a head. Then there are $\binom{n-1}{t-1}$ ways to place the remaining $t-1$ heads among the other $n-1$ coins. Taking the ratio, you should get $\frac{t}{n}$.
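Writing this out step by step: because $X_1 \in \{0, 1\}$, the conditional expectation collapses to a single probability,
$$
E_\theta\left(X_1 \,\middle|\, \textstyle\sum_{i=1}^n X_i = t\right)
= 0 \cdot P_\theta\left(X_1 = 0 \,\middle|\, \textstyle\sum_{i=1}^n X_i = t\right)
+ 1 \cdot P_\theta\left(X_1 = 1 \,\middle|\, \textstyle\sum_{i=1}^n X_i = t\right)
= P_\theta\left(X_1 = 1 \,\middle|\, \textstyle\sum_{i=1}^n X_i = t\right),
$$
which answers the question about where the $X_1 = 1$ comes from. The counting argument then gives
$$
P_\theta\left(X_1 = 1 \,\middle|\, \textstyle\sum_{i=1}^n X_i = t\right)
= \frac{\binom{n-1}{t-1}}{\binom{n}{t}}
= \frac{(n-1)!\, t!\, (n-t)!}{(t-1)!\, (n-t)!\, n!}
= \frac{t}{n}.
$$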
This is reasonable: given that you have seen $t$ heads out of $n$, the natural estimate of the true probability of heads is $\frac{t}{n}$.
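If it helps to see this numerically, here is a quick sanity check of the counting argument using Python's `math.comb` (the function name `cond_prob_first_is_one` and the ranges of $n$ and $t$ are just illustrative choices, not from the book):

```python
from math import comb

def cond_prob_first_is_one(n, t):
    """P(X_1 = 1 | sum X_i = t): sequences of t ones among n coins
    with the first coin a head, divided by all sequences with t ones."""
    return comb(n - 1, t - 1) / comb(n, t)

# Verify the ratio equals t/n for a range of small cases.
for n in range(2, 10):
    for t in range(1, n + 1):
        assert abs(cond_prob_first_is_one(n, t) - t / n) < 1e-12
```

The check passes for every small case, matching the algebraic simplification $\binom{n-1}{t-1}/\binom{n}{t} = t/n$.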