Point and interval estimation of normal variance $\theta$ when the mean $\mu$ is known.


I'm looking at this problem with a given solution about constructing a confidence interval for $\sigma^2$.

Let $X_1, \ldots, X_n$ be independent observations of the random variable $X_i \sim N(\mu, \theta)$, where $\mu$ is known and $\theta > 0$. We are given the following estimator for $\theta$: $$ \hat\theta=\frac{\sum_{i=1}^n{(X_i-\mu)^2}}{\theta} $$

This estimator is used to construct the interval, using tables for the $\chi^2$ distribution, since the estimator is a random variable with a $\chi^2$ distribution.

I get the idea behind this, but what I don't understand is where the estimator comes from (it looks like it was given by magic), or more precisely, how it is derived.

Two answers follow.

Accepted answer:

If $\mu$ is known, then an unbiased estimator of the population variance $\theta = \sigma^2$ is given by $$\hat \theta = \frac{\sum_{i=1}^n (X_i - \mu)^2}{n}.$$ With this definition, we have $$\frac{n\hat\theta}{\theta} = \sum_{i=1}^n \left(\frac{X_i - \mu}{\sigma}\right)^2 = \sum_{i=1}^n Z_i^2 \sim \mathsf{Chisq}(df = n),$$ where $Z_i \stackrel{iid}{\sim}\mathsf{Norm}(0,1),$ and the last step is the definition of $\mathsf{Chisq}(n).$
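As a sanity check, the pivotal quantity $Q = n\hat\theta/\theta$ can be simulated; the values for $\mu$, $\theta$, and $n$ below are illustrative, not from the answer. Since $\mathsf{Chisq}(n)$ has mean $n$ and variance $2n$, the simulated $Q$ values should match both.

```python
import random
import statistics

random.seed(42)

mu, theta, n = 5.0, 4.0, 10   # illustrative known mean, true variance, sample size
reps = 20_000

q_values = []
for _ in range(reps):
    xs = [random.gauss(mu, theta ** 0.5) for _ in range(n)]
    theta_hat = sum((x - mu) ** 2 for x in xs) / n   # unbiased estimator when mu is known
    q_values.append(n * theta_hat / theta)           # pivotal quantity Q = n*theta_hat/theta

# Chisq(n) has mean n and variance 2n.
print(statistics.mean(q_values))       # should be close to n = 10
print(statistics.variance(q_values))   # should be close to 2n = 20
```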

With this distribution for $Q = n\hat\theta/\theta,$ one can find constants $L$ and $U$ such that $$P\left(L \le Q = \frac{n\hat\theta}{\theta} \le U\right) = 0.95.$$ Then by manipulating the inequalities, we have $P(n\hat\theta/U < \theta < n\hat\theta/L) = 0.95,$ so that a 95% confidence interval for $\theta$ is of the form $(n\hat\theta/U,\, n\hat\theta/L).$
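A minimal sketch of the interval construction, with illustrative values; the quantiles $L$ and $U$ are approximated here by simulation rather than read from a table (in practice you would use a $\chi^2$ table or a statistics library). Repeating the experiment shows the interval covers the true $\theta$ about 95% of the time.

```python
import random

random.seed(1)

mu, theta, n = 0.0, 2.0, 8   # illustrative known mean, true variance, sample size

# Approximate the 2.5% and 97.5% quantiles L and U of Chisq(n) by simulation.
draws = sorted(sum(random.gauss(0, 1) ** 2 for _ in range(n)) for _ in range(100_000))
L = draws[int(0.025 * len(draws))]
U = draws[int(0.975 * len(draws))]

def ci_95(xs):
    """95% CI for theta: (n*theta_hat/U, n*theta_hat/L)."""
    theta_hat = sum((x - mu) ** 2 for x in xs) / n
    return n * theta_hat / U, n * theta_hat / L

# Empirical coverage: about 95% of intervals should contain the true theta.
reps, hits = 5_000, 0
for _ in range(reps):
    xs = [random.gauss(mu, theta ** 0.5) for _ in range(n)]
    lo, hi = ci_95(xs)
    hits += lo <= theta <= hi
print(hits / reps)   # should be close to 0.95
```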

Note: If $\mu$ is unknown and estimated by the sample mean $\bar X,$ then point and interval estimation of $\theta$ are slightly different. In this case, an unbiased estimator of $\theta = \sigma^2$ is $S^2 = \frac{\sum_{i=1}^n (X_i - \bar X)^2}{n-1}.$ Then $Q^\prime =\frac{(n-1)S^2}{\theta} \sim \mathsf{Chisq}(df = n-1)$ and a $95\%$ CI for $\theta$ is of the form $\left(\frac{(n-1)S^2}{U},\, \frac{(n-1)S^2}{L}\right),$ where $L$ and $U$ cut $2.5\%$ from the lower and upper tails (respectively) of $\mathsf{Chisq}(n-1).$ In this case, it is not quite so easy to prove that $Q^\prime \sim \mathsf{Chisq}(n-1).$
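The unknown-mean case can be checked the same way (again with illustrative values; here the sample mean replaces $\mu$ and one degree of freedom is lost): the simulated $Q^\prime = (n-1)S^2/\theta$ should have mean close to $n-1$.

```python
import random
import statistics

random.seed(7)

mu, theta, n = 3.0, 1.5, 12   # illustrative values; mu is used only to generate data
reps = 20_000

q_vals = []
for _ in range(reps):
    xs = [random.gauss(mu, theta ** 0.5) for _ in range(n)]
    xbar = sum(xs) / n                                # mu is treated as unknown
    s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)   # unbiased sample variance
    q_vals.append((n - 1) * s2 / theta)

# Chisq(n-1) has mean n-1.
print(statistics.mean(q_vals))   # should be close to n - 1 = 11
```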

Second answer:

It is not magic; it is just the sample variance, if you change the $\theta$ in the denominator of the question's formula to $n$. There are different approaches to finding a desired estimator. One of them is maximum likelihood estimation. If you estimate the variance of a normally distributed random variable by maximizing the likelihood function, you end up with the sample variance. As mentioned in the other answer, there are two cases: the mean is known or unknown. Have a look at this, or better, this one for the full derivation.
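For the known-mean case, the maximum-likelihood derivation is short (a sketch, using the notation of the question). The log-likelihood of $\theta$ is $$\ell(\theta) = -\frac{n}{2}\log(2\pi\theta) - \frac{1}{2\theta}\sum_{i=1}^n (X_i-\mu)^2,$$ and setting its derivative to zero gives $$\frac{d\ell}{d\theta} = -\frac{n}{2\theta} + \frac{1}{2\theta^2}\sum_{i=1}^n (X_i-\mu)^2 = 0 \quad\Longrightarrow\quad \hat\theta = \frac{1}{n}\sum_{i=1}^n (X_i-\mu)^2,$$ which is exactly the estimator used in the other answer.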