Finding standard error estimate / confidence interval of a gamma parameter (say $\alpha$) using the central limit theorem


Suppose $\tilde \alpha$ is the method-of-moments estimator (MME) of $\alpha$. Is there any way to estimate the standard error, or the corresponding 95% CI, of $\tilde\alpha$ using the CLT? I have tried the following [the mean and variance are those of the gamma distribution]:

$$\frac{\overline {X} - \mu}{\frac{\sigma}{\sqrt{n}}} \rightarrow^d N(0,1)$$ or $$\frac{\overline {X} - \frac{\alpha}{\beta}}{\frac{\sqrt{\alpha}/\beta}{\sqrt{n}}}\rightarrow^d N(0,1)$$ or $$\sqrt{n}\left(\overline {X} - \frac{\alpha}{\beta}\right)\rightarrow^d N\!\left(0,\frac{\alpha}{\beta^2}\right)$$
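As a quick numerical sanity check of the first limit (my own sketch, assuming shape $\alpha = 5$ and rate $\beta = 10$), the standardized sample mean should look approximately standard normal:

```r
# CLT check: standardize the sample mean of Gamma(shape = 5, rate = 10), n = 100
set.seed(1)
alpha <- 5; beta <- 10; n <- 100
z <- replicate(10000, {
  x <- rgamma(n, shape = alpha, rate = beta)
  (mean(x) - alpha / beta) / ((sqrt(alpha) / beta) / sqrt(n))
})
mean(z)  # should be close to 0
var(z)   # should be close to 1
```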

I cannot get rid of $\beta$ in the mean and I am not sure whether this is correct. Can someone please help?

Best Answer

Assume $\beta$ is known, so that the shape parameter $\alpha$ is the only unknown parameter of the gamma distribution $\text{Gamma}(\alpha, \beta)$ (with $\beta$ the rate parameter).

The method of moments sets $\mu=\frac{\alpha}{\beta}$, so $\alpha=g(\mu)=\beta\mu$. The method-of-moments estimator is therefore $\hat \alpha=\beta \bar X$.

The delta method says that $\sigma^2_{\hat \alpha}\approx g'(\mu)^2\frac{\sigma^2}{n}$, and here $g'(\mu)=\beta$ while $\sigma^2=\frac{\alpha}{\beta^2}$. Thus $$\sigma^2_{\hat\alpha}\approx \beta^2\cdot\frac{\alpha}{\beta^2n}=\frac{\alpha}{n}$$
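For reference, the delta-method approximation comes from the first-order Taylor expansion of $g$ around $\mu$:

```latex
\hat\alpha = g(\bar X) \approx g(\mu) + g'(\mu)\,(\bar X - \mu)
\quad\Longrightarrow\quad
\operatorname{Var}(\hat\alpha) \approx g'(\mu)^2 \operatorname{Var}(\bar X)
= g'(\mu)^2 \, \frac{\sigma^2}{n}.
```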

Since $\alpha$ is unknown, plug in $\hat\alpha = \beta\bar X$ for $\alpha$ in the variance. A $95\%$ confidence interval is then given by $\beta\bar X \pm 1.96\times \sqrt{\frac{\beta \bar X}{n}}$.
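This interval can be wrapped in a small helper (a sketch; the function name ci_alpha and its arguments are my own choices, with $\beta$ passed in as known):

```r
# 95% CI for the gamma shape alpha when the rate beta is known,
# using the delta-method variance estimate alpha_hat / n
ci_alpha <- function(x, beta, level = 0.95) {
  n <- length(x)
  alpha_hat <- beta * mean(x)        # method-of-moments estimate
  z <- qnorm(1 - (1 - level) / 2)    # 1.96 for level = 0.95
  alpha_hat + c(-1, 1) * z * sqrt(alpha_hat / n)
}

set.seed(2)
x <- rgamma(100, shape = 5, rate = 10)
ci_alpha(x, beta = 10)
```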

alpha <- numeric(10000)
for (i in 1:10000) {
  s <- rgamma(100, shape = 5, rate = 10)  # sample of n = 100 from Gamma(5, 10)
  alpha[i] <- 10 * mean(s)                # MME: beta * xbar with beta = 10
}
hist(alpha)
var(alpha)

s <- rgamma(100, shape = 5, rate = 10)
a <- 10 * mean(s)  # MME of alpha from a single sample
a / 100            # delta-method variance estimate: alpha_hat / n

We can compare the variance from the asymptotic distribution, a/100 for a single sample, with the actual variance across the $10000$ simulations, var(alpha), and they are remarkably similar. The histogram below also shows that the method-of-moments estimator gives estimates concentrated around the true value $\alpha=5$ in this case.

[Histogram of the $10000$ simulated estimates $\hat\alpha$, centered near $\alpha = 5$]

Confirming that the $95\%$ confidence intervals capture $\alpha=5$ about $95\%$ of the time:

ci <- list()
for (i in 1:10000) {
  # 95% CI for each simulated estimate: alpha_hat +/- 1.96 * sqrt(alpha_hat / n)
  ci[[i]] <- alpha[i] + c(-1, 1) * 1.96 * sqrt(alpha[i] / 100)
}
truth <- logical(10000)
for (i in 1:10000) {
  truth[i] <- ci[[i]][1] < 5 & 5 < ci[[i]][2]  # does the interval cover alpha = 5?
}
mean(truth)

gives $0.9506$. Sorry for all the code!