Is it me or is there something seriously flawed with this question?
Here is their question:
An analyst draws 100 random samples of 16 observations from a normally distributed population with a variance of 1. One sample has a standard deviation of 1.11 and an upper limit of the confidence interval equal to 0.52. Assuming 10 samples do not contain the population mean, the corresponding mean is closest to?
THEIR answer:
First, the standard deviation is known; it is equal to the square root of the variance, or 1. Thus, the population standard deviation should be used to compute the confidence interval, not the sample standard deviation. Second, if 10 out of 100 samples do not contain the population mean, this implies a 90% confidence interval. For a 90% confidence interval, 1.645 is the appropriate reliability factor. Recall the confidence interval is given as follows [....]
And they go on to compute the mean from the standard confidence-interval formula, using the 90% confidence level to back out a sample mean of $\bar x = 0.10875$.
I don't get it. What does the fact that 10 samples do not contain the mean have to do with the confidence interval?
To me this sounds like flipping a coin 20 times, getting heads 10 times, and concluding that the other 10 must therefore be tails...
Do you not agree that if 100 confidence intervals were constructed at the 90% confidence level, about 90 of them would contain the true mean? You're given that 10 do not contain the true mean. So there you have it: 90% is a reasonable estimate of the confidence level, and then you work backward from the upper boundary to recover the sample mean:
$$\bar x = 0.52 - 1.645\,\frac{1}{\sqrt{16}} = 0.10875$$
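As a quick sanity check of that back-calculation, here is a minimal Python sketch using only the standard library; the variable names are my own, and the numbers are the ones given in the question:

```python
from math import sqrt
from statistics import NormalDist

n = 16          # observations per sample
sigma = 1.0     # population std dev = sqrt(known variance 1)
upper = 0.52    # given upper confidence limit

# 10 of 100 intervals miss the true mean -> estimated confidence level 90%
alpha = 10 / 100
z = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided reliability factor, ~1.645

# invert the upper bound: upper = mean + z * sigma / sqrt(n)
mean = upper - z * sigma / sqrt(n)
print(round(mean, 4))                     # ~0.1088, i.e. 0.10875 with z rounded to 1.645
```

Note the exact `inv_cdf` value (1.64485...) gives 0.1088, while the textbook's rounded factor 1.645 gives exactly 0.10875, so the answer choices should be read as "closest to".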