Unbiased Estimator for $\sigma^2$ in $N(0,\sigma^2)$


Consider $N(\mu,\sigma^2)$ and draw independent samples $X_1, \dots, X_n$.

It is a well-known fact that the MLE $$ S_n^2= \frac 1 n\sum_{i=1}^n(X_i - \overline X_n)^2 $$ is a biased estimator of $\sigma^2$.

If we suppose $\mu= 0$, then I get that it is unbiased. Am I wrong? Thanks!


3 Answers

Best answer:

If you somehow know the value of $\mu,$ regardless of whether it's $0$ or not, then you should use that instead of the estimate $\bar X_n,$ and in that case the MLE of $\sigma^2$ is $\displaystyle \frac 1 n \sum_{i=1}^n (X_i-\mu)^2,$ and that is unbiased. To see that it's unbiased, just observe that $\operatorname E((X_i-\mu)^2) = \sigma^2.$

But $\displaystyle \frac 1 n \sum_{i=1}^n (X_i-\bar X)^2 $ still has expected value $\frac{n-1} n \sigma^2.$ If $\mu$ is known, then that estimator is still biased but is not the MLE.
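A quick simulation (an illustrative NumPy sketch, not part of the original answer) confirms both facts: centring at the known $\mu$ makes the $1/n$ estimator average to $\sigma^2$, while centring at $\bar X_n$ makes it average to $\frac{n-1}{n}\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000
mu, sigma2 = 0.0, 1.0

# trials x n matrix of N(mu, sigma2) samples
x = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))

# MLE with the mean known: (1/n) * sum (X_i - mu)^2  -> unbiased
est_known = ((x - mu) ** 2).mean(axis=1)

# Same formula centred at the sample mean: biased by a factor (n-1)/n
xbar = x.mean(axis=1, keepdims=True)
est_sample = ((x - xbar) ** 2).mean(axis=1)

print(est_known.mean())   # close to sigma^2 = 1.0
print(est_sample.mean())  # close to (n-1)/n * sigma^2 = 0.8
```

With $n=5$ the bias factor $\frac{n-1}{n}=0.8$ is large enough to be unmistakable in the averages.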

To say that $\mu$ is “known” simply means the family of distributions you're working with is $\left\{ N(\mu,\sigma^2): \sigma^2>0 \right\}$ rather than $\left\{ N(\mu,\sigma^2) : \sigma^2>0\ \&\ \mu\in\mathbb R \right\}.$ The likelihood function is then a function of $\sigma$ or of $\sigma^2$ rather than of two variables, one of which is $\mu.$

Another answer:

Note that if we know the mean $\mu$ and use the estimator $\tilde S_n^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2$, then $$E[\tilde S_n^2] = \frac{1}{n}\sum_{i=1}^{n}E[(X_i-\mu)^2]=\sigma^2.$$ That is, $\tilde S_n^2$ is an unbiased estimator of $\sigma^2$.

On the other hand, if we estimate the mean by the sample average and compute $$S_n^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i-\bar{X})^2,$$ then the estimator is no longer unbiased.
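The standard remedy is Bessel's correction: divide by $n-1$ instead of $n$. A short NumPy sketch (not from the original answer) shows the rescaling restores unbiasedness:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 5, 200_000
x = rng.normal(0.0, 1.0, size=(trials, n))  # true sigma^2 = 1

xbar = x.mean(axis=1, keepdims=True)
ss = ((x - xbar) ** 2).sum(axis=1)          # sum of squared deviations

biased = ss / n          # averages to (n-1)/n * sigma^2
bessel = ss / (n - 1)    # Bessel-corrected: averages to sigma^2

print(biased.mean(), bessel.mean())
```

This is exactly what `np.var(x, ddof=1)` computes, versus the default `ddof=0`.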

Another answer:

If $\mu = 0$ then $\sum_{i=1}^n X_i^2/\sigma^2 \sim \chi^2_{(n)}$, hence $$ \mathbb{E}\left(\frac{1}{n}\sum_{i=1}^n X_i^2\right) = \mathbb{E}\left(\frac{\sigma^2}{n} \sum_{i=1}^n X_i^2/\sigma^2\right) = \sigma^2\,\frac{n}{n} = \sigma^2. $$ The same argument works for any other known value of $\mu$, with $X_i^2$ replaced by $(X_i-\mu)^2$. If instead you estimate $\mu$ with $\bar{X}$, then $\sum_{i=1}^n(X_i - \bar{X})^2/\sigma^2 \sim \chi^2_{(n-1)}$, because of the decomposition $$ \frac{\sum_{i=1}^n(X_i - \bar{X})^2}{\sigma^2} = \frac{\sum_{i=1}^n(X_i - \mu)^2}{\sigma^2} - \frac{n(\bar{X} - \mu)^2}{\sigma^2}. $$ The left-hand side is independent of $\bar{X}$ (for normal samples, the sample mean and the sample variance are independent), so the $\chi^2_{(n)}$ variable on the right splits into two independent pieces, a $\chi^2_{(n-1)}$ and a $\chi^2_{(1)}$, and the left-hand side is the $\chi^2_{(n-1)}$ piece.
In particular $\mathbb{E}\left(\sum_{i=1}^n(X_i - \bar{X})^2\right) = (n-1)\sigma^2$, so dividing by $n$ instead of the actual degrees of freedom, $n-1$, produces the bias.
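The decomposition above is an exact algebraic identity, so it can be checked numerically on any sample (a NumPy sketch; the sample size and parameters are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
mu = 1.5                          # true mean (any value works)
x = rng.normal(mu, 2.0, size=10)  # one sample of size n = 10
n = x.size
xbar = x.mean()

# sum (X_i - Xbar)^2  ==  sum (X_i - mu)^2  -  n * (Xbar - mu)^2
lhs = ((x - xbar) ** 2).sum()
rhs = ((x - mu) ** 2).sum() - n * (xbar - mu) ** 2

print(np.isclose(lhs, rhs))  # True: the identity holds exactly
```

The identity holds for every realisation, not just in expectation; the chi-square distributional claims then follow from the independence of $\bar X$ and the sample variance.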