Are the statements about the confidence interval correct?


We have a 90%-confidence interval. I want to check if the following statements are correct.

  1. If we double the sample size, the probability that the value we are looking for lies outside the confidence interval becomes smaller.

  2. The bigger the standard error, the smaller the confidence interval.

Since the confidence interval is $\left (\overline{x}- Z_{\alpha/2}\cdot s_x, \overline{x}+ Z_{\alpha/2}\cdot s_x\right )$, where $s_x$ is the standard error, I think the second statement is wrong: it should be that the bigger the standard error, the bigger the confidence interval. Is this correct?

What about the first statement?


Best answer:

The first is a bit tricky. It's hard to pin down the probability that the true value falls within a particular computed interval, and notably this number is not $0.90$. The correct interpretation of the confidence level is that if we repeated the experiment many times, about $90$% of the resulting intervals would capture the true value. This is not the same thing as saying that the probability of this particular interval containing the true value is $0.90$. This is a subtle, but important, point.
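This repeated-sampling interpretation is easy to check by simulation. The sketch below (hypothetical numbers: true mean $5$, standard deviation $2$, samples of size $50$) builds the interval $\overline{x} \pm Z_{\alpha/2}\, s_x$ many times and counts how often it captures the true mean:

```python
import random
import statistics

# Simulate repeated experiments from a known population and count how
# often the 90% interval (z = 1.645) captures the true mean.
random.seed(0)
true_mean, true_sd, n, z = 5.0, 2.0, 50, 1.645

trials = 2000
hits = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, true_sd) for _ in range(n)]
    xbar = statistics.fmean(sample)
    se = statistics.stdev(sample) / n**0.5   # standard error s_x
    if xbar - z * se <= true_mean <= xbar + z * se:
        hits += 1

coverage = hits / trials
print(f"empirical coverage: {coverage:.3f}")
```

In the long run the printed coverage settles near $0.90$ (slightly below, since $z$ rather than the $t$ quantile is used for an estimated standard deviation), which is exactly the frequentist reading of "90% confidence": a property of the procedure, not of any one interval.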

Intervals that satisfy the statement "this interval has a $\geq 90$% chance of containing the correct value" are called Bayesian confidence intervals, or credible intervals, and are calculated differently. I'm not sure whether the answer is in fact true or false (I'm leaning false in general, and true for "nicely behaved" distributions), but the main takeaway is that it's a badly posed question.

Discussion of this distinction with a worked out example can be found here.

The second statement is wrong for the reason you stated.

Another answer:

To be precise in your question, I assume the "value you are looking for" is the mean of the population. And there is a hidden assumption that the underlying population is distributed according to some distribution that has a finite mean and finite variance. (It does not have to be a Gaussian, as long as the sample size is fairly large.)

As pointed out in the comments, the second statement is wrong: for any fixed $\alpha$, the width of the confidence interval increases proportionally to the standard error.
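The proportionality is immediate from the formula in the question: the interval $\left(\overline{x}- Z_{\alpha/2}\cdot s_x,\ \overline{x}+ Z_{\alpha/2}\cdot s_x\right)$ has width $2\,Z_{\alpha/2}\, s_x$. A tiny sketch with illustrative standard errors:

```python
# Width of (xbar - z*se, xbar + z*se) is 2*z*se, so it grows linearly
# with the standard error; z = 1.645 for a 90% interval.
z = 1.645
widths = {}
for se in (0.5, 1.0, 2.0):
    widths[se] = 2 * z * se
    print(f"se = {se:.1f} -> interval width = {widths[se]:.2f}")
```

Doubling the standard error doubles the width, confirming that the second statement has the direction backwards.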

The first statement is wrong as well, for a subtle reason: by definition, the confidence interval is an interval such that the probability of the actual (mean) value lying outside the interval equals $\alpha$ (here $\alpha = 0.10$ for a 90% interval). As you increase the sample size, the range of the confidence interval indeed very likely gets smaller. But whatever interval you get, the probability of the actual value lying outside it is still the same $\alpha$. You have more information, but the interval shrinks by exactly enough that the extra information neither increases nor decreases the likelihood of that error.
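This trade-off can be seen in a simulation sketch (hypothetical standard normal population): as the sample size grows, the average interval width shrinks, but the miss rate stays pinned near $\alpha = 0.10$.

```python
import random
import statistics

# For several sample sizes, measure the average 90%-interval width and
# the fraction of intervals that capture the true mean.
random.seed(1)
true_mean, true_sd, z = 0.0, 1.0, 1.645
trials = 2000

mean_widths, coverages = {}, {}
for n in (25, 50, 100):
    hits, total_width = 0, 0.0
    for _ in range(trials):
        sample = [random.gauss(true_mean, true_sd) for _ in range(n)]
        xbar = statistics.fmean(sample)
        se = statistics.stdev(sample) / n**0.5
        total_width += 2 * z * se
        hits += (xbar - z * se <= true_mean <= xbar + z * se)
    mean_widths[n] = total_width / trials
    coverages[n] = hits / trials
    print(f"n={n:4d}: mean width {mean_widths[n]:.3f}, "
          f"coverage {coverages[n]:.3f}")
```

The widths drop roughly like $1/\sqrt{n}$ while the coverage column barely moves, which is the answer's point: doubling the sample narrows the interval but does not change the probability of missing the true value.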