Test of confidence intervals?


In one of my assignments I have to "test" whether the confidence intervals for a set of parameters in a mixed effect model are accurate. I'm asked to simulate from the fitted parameters, then refit the same model many times, and lastly take the 2.5% and 97.5% quantiles of the refitted estimates and compare them with the original CIs. My question is, how does this procedure in any way measure how accurate my original confidence intervals are?


Each time you simulate data from fitted parameters you find another estimate of the parameter. If the original 95% CI is valid, about 95% of these new estimates ought to lie in the original CI.

You must be studying, or about to study, parametric bootstrapping. There are so many different formulations of this idea that I hesitate to get into a theoretical discussion, without knowing the particulars of your course and text, for fear of causing additional confusion.

Take a very simple case. I have a sample of $n = 36$ observations from $Norm(\mu, \sigma=10).$ Suppose $\bar X = 106.9,$ so that a 95% z-interval for $\mu$ is $106.9 \pm 1.96(10/6)$ or $(103.6, 110.2).$
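This interval can be reproduced directly in R (a quick sketch; `qnorm(.975)` supplies the 1.96 multiplier):

```r
# 95% z-interval for the mean: x.bar +/- 1.96 * sigma / sqrt(n)
x.bar = 106.9; sg = 10; n = 36
ci = x.bar + c(-1, 1) * qnorm(.975) * sg / sqrt(n)
round(ci, 1)
## [1] 103.6 110.2
```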

Now I take a large number, say $B = 100000,$ of samples of size 36 from $Norm(106.9, 10)$ and use R to carry out the procedure you describe.

 B = 10^5; mu.est = 106.9; sg = 10; n = 36
 RDTA = matrix(rnorm(B*n, mu.est, sg), nrow=B)  # each row of the B x n matrix is one sample
 x.bar = rowMeans(RDTA)                         # re-estimate the mean from each sample
 quantile(x.bar, c(.025, .975))
 ##      2.5%    97.5% 
 ##  103.6111 110.1773 

The result is not far from the original CI $(103.6, 110.2).$ In this trivial case, the agreement is not surprising because we are just re-establishing by simulation that $\bar X \sim Norm(\mu, \sigma/\sqrt{n}).$
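A related check, in the spirit of the first paragraph above: the fraction of simulated estimates falling inside the original CI should be close to 95%. A self-contained sketch (the seed is illustrative, not from the original):

```r
set.seed(2024)  # illustrative seed so the run is reproducible
B = 10^5; mu.est = 106.9; sg = 10; n = 36
x.bar = rowMeans(matrix(rnorm(B*n, mu.est, sg), nrow = B))
ci = mu.est + c(-1, 1) * qnorm(.975) * sg / sqrt(n)  # original 95% z-interval
mean(x.bar > ci[1] & x.bar < ci[2])  # proportion inside; should be near 0.95
```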

In more complex cases, the procedure must be modified, especially for estimators whose sampling distributions are heavily skewed.