The following is a problem in my book that I don't really understand:
We take a random sample $x_1,x_2,\ldots,x_n$ from a population that is $N(\mu,\sigma)$, where $\mu$ and $\sigma$ are unknown.
We build two estimates:
$$\mu^*_{\text{obs}} = \overline{x} = (x_1 + x_2 + \cdots + x_n)/n$$
and
$$\hat{\mu}^*_{\text{obs}} = (x_1+x_2)/2$$
Show that both estimates are unbiased.
I know that an estimator of this kind is unbiased when we divide by $n-1$ instead of $n$. So how can those two estimates be unbiased? In my eyes they are biased.
It follows by the linearity of expectation: $$ E[\mu^*_{\text{obs}}]=\frac{1}{n}\left(E[x_1]+\cdots+E[x_n]\right)=\frac{1}{n}\left(\mu+\cdots+\mu\right)=\frac{1}{n}n\mu=\mu $$ and hence $\mu^*_{\text{obs}}$ is unbiased for $\mu$. The same applies to $\hat{\mu}^*_{\text{obs}}$, either by direct computation just as above, or by noting that it is in fact $\mu^*_{\text{obs}}$ based on a random sample of size $n=2$.
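You can also see this empirically. Here is a small Monte Carlo sketch (the parameter values $\mu=5$, $\sigma=2$, $n=10$ are arbitrary choices, not from the problem): the average of each estimator over many repeated samples lands close to $\mu$, as unbiasedness predicts.

```python
import random
import statistics

random.seed(0)
mu, sigma, n, trials = 5.0, 2.0, 10, 100_000

xbar_vals = []       # values of mu*_obs = sample mean of all n observations
first_two_vals = []  # values of hat-mu*_obs = mean of the first two only
for _ in range(trials):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    xbar_vals.append(sum(x) / n)
    first_two_vals.append((x[0] + x[1]) / 2)

# Both long-run averages should be close to mu = 5.0.
print(statistics.mean(xbar_vals))
print(statistics.mean(first_two_vals))
```

Note that both estimators are unbiased, but $\mu^*_{\text{obs}}$ has the smaller variance ($\sigma^2/n$ versus $\sigma^2/2$), which is why one would prefer it in practice.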
What you mention about dividing by $n-1$ instead of $n$ applies to the sample variance, i.e. $$ s^2=\frac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})^2 $$ is unbiased for $\sigma^2$ (I take it $N(\mu,\sigma)$ means that $\sigma$ is the standard deviation).
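A quick simulation (again with arbitrary parameter choices, here $\sigma=3$ and $n=5$) shows the distinction: dividing the sum of squared deviations by $n-1$ averages out to $\sigma^2$, while dividing by $n$ systematically underestimates it by the factor $(n-1)/n$.

```python
import random

random.seed(1)
mu, sigma, n, trials = 0.0, 3.0, 5, 200_000

sum_unbiased = 0.0  # running sum of the (n-1)-denominator estimates
sum_biased = 0.0    # running sum of the n-denominator estimates
for _ in range(trials):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(x) / n
    ss = sum((xi - xbar) ** 2 for xi in x)
    sum_unbiased += ss / (n - 1)
    sum_biased += ss / n

# The first average should be close to sigma^2 = 9,
# the second close to (n-1)/n * sigma^2 = 7.2.
print(sum_unbiased / trials)
print(sum_biased / trials)
```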