What "exactly" is a confidence interval?


I have trouble understanding the exact definition of a, say, 95% confidence interval. Suppose we have a random sample of size $n$ from a normal population and we want to calculate a 95% confidence interval for the population mean, $\mu$. Also assume, for simplicity, that we know the true population variance, $\sigma^2$. Let $\overline{X}$ be the sample mean. The limits of the CI are:

Lower Bound = $\overline{X} -Z \frac{\sigma}{\sqrt{n}}$

Upper Bound= $\overline{X} +Z \frac{\sigma}{\sqrt{n}}$

If the confidence level is 95% (so $Z = 1.96$), then the exact definition of a 95% confidence interval is:

$$P(\overline{X} -Z \frac{\sigma}{\sqrt{n}} < \mu < \overline{X} +Z \frac{\sigma}{\sqrt{n}}) = 0.95 $$
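To make the formula concrete, here is a small sketch of the arithmetic (the sample values, $\sigma = 10$ and $z = 1.96$ are my own illustrative numbers, not from any particular dataset):

```python
import statistics

# Hypothetical sample, with the population sigma assumed known.
sample = [85, 92, 88, 95, 90, 87, 93, 89, 91, 90]
sigma, z = 10.0, 1.96  # known population SD; z-value for 95% confidence
n = len(sample)

xbar = statistics.fmean(sample)          # sample mean, the center of the CI
half_width = z * sigma / n ** 0.5        # z * sigma / sqrt(n)

lower, upper = xbar - half_width, xbar + half_width
print(round(lower, 2), round(upper, 2))  # prints 83.8 96.2
```

For this sample, $\overline{x} = 90$ and the interval is roughly $(83.8,\ 96.2)$; a different sample from the same population would give a different interval, which is exactly what the question below is about.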

According to several references, the correct interpretation of this interval is stated as follows:

If we took many samples of size $n$, we would expect that in 95% of the samples the interval contains the true value of $\mu$, and therefore in the remaining 5% it does not contain $\mu$.

There is also another clarification of the concept:

The confidence is in the method, not in a particular CI. If we repeated the sampling procedure many times, approximately 95% of the intervals constructed would capture the true population mean.
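This "confidence in the method" reading can be checked by simulation. The sketch below (my own, with an arbitrarily chosen $\mu = 90$, $\sigma = 10$, $n = 25$) draws many samples, builds the known-sigma z-interval each time, and counts how often the interval captures the true mean:

```python
import random
import statistics

random.seed(0)
mu, sigma, n, z = 90.0, 10.0, 25, 1.96  # true parameters (known to the simulator)
trials = 10_000

covered = 0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.fmean(sample)
    half_width = z * sigma / n ** 0.5
    # Does this particular realized interval contain the true mean?
    if xbar - half_width < mu < xbar + half_width:
        covered += 1

print(covered / trials)  # close to 0.95
```

The coverage rate comes out near 0.95, but any single interval in the loop either contains $\mu$ or it does not, which is precisely the tension the question raises.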

If the confidence is in the method, what does a particular confidence interval from a particular sample really mean? For example, say I took a sample and got this interval for the population mean:

$$(80,100)$$

If I can't say that "there is a 95% probability that the interval (80,100) contains the real parameter $\mu$" (because the sample has already been taken, so there are no longer any random variables), how exactly can I interpret these two limits? I understand the limits perfectly well as random variables, but not as realizations. What can I say about (80,100)?