My professor was trying to explain something to me about confidence intervals, and I haven't been able to understand it.
There is a statement I think is true that she says is false. I can't understand why it is false.
The situation is that (107.8, 116.2) is a 95% confidence interval for a population mean μ.
The statement is:
There is a 95% probability that the interval from 107.8 to 116.2 contains μ
My statistics professor says that the statement is false because the probability is either 0 or 1.
However, I am fairly sure that the statement is true, given this definition of probability:
the extent to which an event is likely to occur, measured by the ratio of the favorable cases to the whole number of cases possible.
My statistics professor has tried to explain to me her point but I have not understood it yet.
We did both agree that the following statement is true:
This interval was constructed using a method that produces intervals that capture the true mean in 95% of all possible samples
I am fairly sure this statement says exactly the same thing as the first statement does. What am I missing?
The typical interpretation of confidence intervals goes something like this:
Let's say I have a parameter $\xi$ and I have decided upon an experiment to construct a confidence interval for it. A 95% CI means that if I repeated the experiment over and over (infinitely many times), 95% of the intervals I generated would contain $\xi$. Now, $\xi$ is a fixed number, so any one fixed interval either contains $\xi$ or it does not; for that interval, the probability is 1 or 0, not 0.95. The 95% belongs to the procedure, not to the particular interval. This is what your teacher means.
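You can see this repeated-experiment interpretation in a quick simulation. This is just a sketch with made-up numbers (a known normal population with mean 112, matching nothing in your problem): draw many samples, build a 95% z-interval from each, and count how often the intervals capture the true mean. Each individual interval either contains the mean or it doesn't; the 95% shows up only across the whole collection of intervals.

```python
import random
import statistics

random.seed(0)
mu, sigma = 112.0, 10.0   # assumed "true" population parameters (hypothetical)
n = 25                    # sample size per repeated experiment
z = 1.96                  # standard normal critical value for 95% confidence

def ci_for_one_sample():
    """Run one experiment: sample n values and return a 95% CI for the mean."""
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.fmean(sample)
    half_width = z * sigma / n ** 0.5   # sigma treated as known, so a z-interval
    return (xbar - half_width, xbar + half_width)

trials = 10_000
hits = sum(1 for lo, hi in (ci_for_one_sample() for _ in range(trials))
           if lo <= mu <= hi)
print(f"coverage over {trials} experiments: {hits / trials:.3f}")
```

Running this prints a coverage close to 0.95: roughly 95% of the generated intervals contain $\mu$, even though any single one of them simply does or does not.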
Does this make sense, or should I give it another go?
----EDIT-----
Additionally, if we have a 95% confidence interval and $\xi$ falls outside it, then the sampling event (the collection of data used to construct the CI) was one that had probability 5% (or less, for conservative intervals) of occurring. In this sense, it is the sampling event that was "rare," not the interval that was "unlucky."