Is there a mathematical basis for the idea that this interpretation of confidence intervals is incorrect, or is it just frequentist philosophy?


Suppose the mean time it takes all workers in a particular city to get to work is estimated as $21$ minutes. A $95\%$ confidence interval is calculated to be $(18.3, 23.7).$

According to this website, the following statement is incorrect:

There is a $95\%$ chance that the mean time it takes all workers in this city to get to work is between $18.3$ and $23.7$ minutes.

Indeed, a lot of websites echo a similar sentiment. This one, for example, says:

It is not quite correct to ask about the probability that the interval contains the population mean. It either does or it doesn't.

The meta-concept at work seems to be the idea that population parameters cannot be random, only the data we obtain about them can be random (related). This doesn't sit right with me, because I tend to think of probability as being fundamentally about our certainty that the world is a certain way. Also, if I understand correctly, there's really no mathematical basis for the notion that probabilities only apply to data and not parameters; in particular, this seems to be a manifestation of the frequentist/bayesianism debate.

Question. If the above comments are correct, then it would seem that the kinds of statements made on the aforementioned websites shouldn't be taken too seriously. To make a stronger claim: if an exam grader were to mark a student down for the aforementioned "incorrect" interpretation of confidence intervals, my impression is that this would be inappropriate (this hasn't happened to me; it's a hypothetical).

In any event, based on the underlying mathematics, are these fair comments I'm making, or is there something I'm missing?

There are 4 answers below.

BEST ANSWER

Confidence intervals within the frequentist paradigm: You are correct that these assertions (warning against interpreting the confidence interval as a probability interval for the parameter) come from the fact that confidence intervals arise in the classical frequentist method, and in that context, the parameter is considered a fixed "unknown constant", not a random variable. There is a relevant probability statement pertaining to the confidence interval, which is:

$$\mathbb{P}(L(\mathbf{X}) \leqslant \mu \leqslant U(\mathbf{X}) \mid \mu) = 1-\alpha,$$

where $L(\mathbf{X})$ and $U(\mathbf{X})$ are bounds formed as functions of the sample data $\mathbf{X}$ (usually via the use of rearrangement of a probability statement about a pivotal quantity). Importantly, the data vector $\mathbf{X}$ is the random variable in this probability statement, and the parameter $\mu$ is treated as a fixed "unknown constant". (I have indicated this by putting it as a conditioning variable, but within the frequentist paradigm you wouldn't even specify this; it would just be implicit.) The confidence interval is derived from this probability statement by taking the observed sample data $\mathbf{x}$ to yield the fixed interval $\text{CI}(1-\alpha) = [ L(\mathbf{x}), U(\mathbf{x}) ]$.

The reason for the assertions you are reading is that once you replace the random sample data $\mathbf{X}$ with the observed sample data $\mathbf{x}$, you can no longer make the probability statement analogous to the above. Since the data and parameters are both constants, you now have the trivial statement:

$$\mathbb{P}(L(\mathbf{x}) \leqslant \mu \leqslant U(\mathbf{x})) = \begin{cases} 0 & \text{if } \mu \notin \text{CI}(1-\alpha), \\[6pt] 1 & \text{if } \mu \in \text{CI}(1-\alpha). \end{cases}$$
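The pre-data coverage statement is easy to check empirically. Here is a minimal simulation sketch (the "true" commute-time distribution, its parameters, and the sample size are made up for illustration; a known-variance normal model is assumed so the interval is the simple $\bar{x} \pm 1.96\,\sigma/\sqrt{n}$):

```python
import math
import random

random.seed(42)
mu, sigma, n = 21.0, 5.0, 30    # hypothetical "true" commute-time distribution
z = 1.96                        # 97.5th percentile of the standard normal
trials, covered = 10_000, 0

for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    half = z * sigma / math.sqrt(n)   # known-sigma interval for simplicity
    if xbar - half <= mu <= xbar + half:
        covered += 1

print(covered / trials)   # typically close to 0.95
```

Before the data are observed, the random interval covers $\mu$ about 95% of the time; after observing a particular $\mathbf{x}$, each individual interval either covers $\mu$ or it doesn't, which is exactly the 0/1 statement above.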


Confidence intervals within the Bayesian paradigm: If you would prefer to interpret the unknown parameter $\mu$ as a random variable, you are now undertaking a Bayesian treatment of the problem. Although the confidence interval is a procedure formulated within the classical paradigm, it is possible to interpret it within the context of Bayesian analysis.

However, even within the Bayesian context, it is still not valid to assert a posteriori that the CI contains the true parameter with the specified probability. In fact, this posterior probability depends on the prior distribution for the parameter. To see this, we observe that:

$$\mathbb{P}(L(\mathbf{x}) \leqslant \mu \leqslant U(\mathbf{x}) \mid \mathbf{x}) = \int \limits_{L(\mathbf{x})}^{U(\mathbf{x})} \pi(\mu | \mathbf{x}) d\mu = \frac{\int_{L(\mathbf{x})}^{U(\mathbf{x})} L_\mathbf{x}(\mu) \pi(\mu)d\mu}{\int L_\mathbf{x}(\mu) \pi(\mu) d\mu}.$$

This posterior probability depends on the prior, and is not generally equal to $1-\alpha$ (though it may be in some special cases). The initial probability statement used in the confidence interval imposes a restriction on the sampling distribution, which constrains the likelihood function, but it still allows us freedom to choose different priors, yielding different posterior probabilities for the correctness of the interval.

(Note: It is easy to show that $\mathbb{P}(L(\mathbf{X}) \leqslant \mu \leqslant U(\mathbf{X})) = 1-\alpha$ using the law of total probability, but this is a prior probability, not a posterior probability, since it does not condition on the data. Thus, within the Bayesian paradigm, we may say a priori that the confidence interval will contain the parameter with the specified probability, but we cannot generally say this a posteriori.)
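The dependence on the prior can be made concrete in the conjugate normal-normal case, where the posterior is available in closed form. The numbers below (known $\sigma$, sample size, observed mean, and both priors) are invented for illustration; note the interval that comes out matches the $(18.3, 23.7)$ in the question:

```python
import math

def norm_cdf(x, mean, sd):
    """Standard normal CDF shifted/scaled, via math.erf."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

# hypothetical setup: known sigma, observed sample mean
sigma, n, xbar = 7.0, 26, 21.0
se = sigma / math.sqrt(n)
L, U = xbar - 1.96 * se, xbar + 1.96 * se   # the frequentist 95% CI

def posterior_prob_in_ci(m0, tau):
    """Normal prior N(m0, tau^2) + normal likelihood => normal posterior."""
    post_prec = 1 / tau**2 + n / sigma**2     # precisions add
    post_var = 1 / post_prec
    post_mean = post_var * (m0 / tau**2 + n * xbar / sigma**2)
    sd = math.sqrt(post_var)
    return norm_cdf(U, post_mean, sd) - norm_cdf(L, post_mean, sd)

print(posterior_prob_in_ci(m0=15.0, tau=2.0))    # informative prior: well below 0.95
print(posterior_prob_in_ci(m0=21.0, tau=100.0))  # near-flat prior: close to 0.95
```

An informative prior centered away from the data pulls posterior mass out of the interval, while a very diffuse prior makes the posterior probability approximately $1-\alpha$, which is the special case in which the two interpretations nearly coincide.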

ANSWER

Here is how it appears in Larry Wasserman's All of Statistics

Warning! There is much confusion about how to interpret a confidence interval. A confidence interval is not a probability statement about $\theta$ (parameter of the problem), since $\theta$ is a fixed quantity, not a random variable. Some texts interpret it as follows: If I repeat the experiment over and over, the interval will contain the parameter 95 percent of the time. This is correct but useless, since you rarely repeat the same experiment over and over. A better interpretation is this: On day 1 you collect data and construct a 95 percent confidence interval for a parameter $\theta_1$. On day 2, you collect new data and construct a 95 percent confidence interval for an unrelated parameter $\theta_2$. [...] You continue this way constructing confidence intervals for a sequence of unrelated parameters $\theta_1, \theta_2, \dots$. Then 95 percent of your intervals will trap the true parameter value. There is no need to introduce the idea of repeating the same experiment over and over.
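Wasserman's "sequence of unrelated problems" reading can be checked directly. In the sketch below, every "day" has a freshly drawn, unrelated parameter and a different noise level (the ranges and sample size are arbitrary choices for illustration), yet about 95% of the intervals still trap their own parameter:

```python
import math
import random

random.seed(1)
z, n = 1.96, 40
days, trapped = 5_000, 0

for _ in range(days):
    theta = random.uniform(-50, 50)   # a fresh, unrelated parameter each "day"
    sigma = random.uniform(1, 10)     # and a different experiment each "day"
    sample = [random.gauss(theta, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    half = z * sigma / math.sqrt(n)   # known-sigma interval for simplicity
    if xbar - half <= theta <= xbar + half:
        trapped += 1

print(trapped / days)   # typically close to 0.95
```

The long-run guarantee attaches to the interval-constructing procedure, not to any one experiment, which is why no repetition of the same experiment is needed.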

ANSWER

The very technical definition of a 95% confidence interval implies that if we were to repeat the experiment 100 times, roughly 95 of those times our confidence interval would contain the true population mean.

So saying that there is a 95% probability that the true population mean lies in the interval does not seem intuitively wrong, but technically the probabilities can work out differently depending on how the statement is framed.

The definition relies upon the true population mean being a constant with our random intervals trying to capture it, whereas the usual, technically incorrect, intuitive reading treats the confidence interval as constant and the true mean as shifting around randomly.

I hope this sheds some light on this topic.

ANSWER

We need to distinguish between two claims here:

  1. Population parameters cannot be random, only the data we obtain about them can be random.
  2. Interpreting confidence intervals as containing a parameter with a certain probability is wrong.

The first is a sweeping statement that you correctly describe as frequentist philosophy (in some cases “dogma” would seem more appropriate) and that you don't need to subscribe to if you find a subjectivist interpretation of probabilities to be interesting, useful or perhaps even true. (I certainly find it at least useful and interesting.)

The second statement, however, is true. Confidence intervals are inherently frequentist animals; they're constructed with the goal that no matter what the value of the unknown parameter is, for any fixed value you have the same prescribed probability of constructing a confidence interval that contains the “true” value of the parameter. You can't construct them according to this frequentist prescription and then reinterpret them in a subjectivist way; that leads to a statement that's false not because it doesn't follow frequentist dogma but because it wasn't derived for a subjective probability. A Bayesian approach leads to a different interval, which is rightly given a different name, a credible interval.

An instructive example is afforded by the confidence intervals for the unknown rate parameter of a Poisson process with known background noise rate. In this case, there are values of the data for which it is certain that the confidence interval does not contain the “true” parameter. This is not an error in the construction of the intervals; they have to be constructed like that to allow them to be interpreted in a frequentist manner. Interpreting such a confidence interval in a subjectivist manner would result in nonsense. The Bayesian credible intervals, on the other hand, always have a certain probability of containing the “random” parameter.
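As a concrete, deliberately crude illustration of the phenomenon (the careful constructions in the literature differ in detail, but exhibit the same pathology), consider a naive Wald-style interval for a nonnegative signal rate $s$ when the background rate $b$ is known:

```python
import math

# Hypothetical setup: we observe a count x ~ Poisson(s + b), where the
# background rate b is known and the signal rate s >= 0 is the parameter
# of interest. A deliberately naive Wald-style 95% interval for s is
#   (x - b) +/- 1.96 * sqrt(x),
# using sqrt(x) as a plug-in estimate of the Poisson standard deviation.
b = 3.0

def wald_interval(x):
    half = 1.96 * math.sqrt(x)
    return (x - b) - half, (x - b) + half

# For x = 0 the interval collapses to [-3, -3]: it lies entirely below 0,
# so it certainly excludes every admissible value s >= 0.
lo, hi = wald_interval(0)
print(lo, hi)
```

For such an observation, a subjectivist reading ("there is a 95% chance $s$ is in the interval") is plainly nonsense, since the interval cannot contain any admissible value of $s$ at all; yet intervals must sometimes behave this way for the long-run frequentist coverage to hold.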

I read a nice exposition of this example recently but I can't find it right now – I'll post it if I find it again, but for now I think this paper is also a useful introduction. (Example $11$ on p. $20$ is particularly amusing.)