MLE estimator in hypothesis test when sample size is too small


The MLE has an asymptotic normal distribution, $$\sqrt{n}(\hat{\theta} - \theta_0) \rightarrow N\left(0,\frac{1}{I(\theta_0)}\right)$$ I want to perform a hypothesis test based on the MLE. The null hypothesis is $\theta_0 = 0$ and the alternative hypothesis is $\theta_0 \neq 0$.

I first calculate $z = \hat{\theta}\sqrt{nI(\theta_0)}$, which under $H_0$ is approximately standard normal, and from it I compute the p-value. However, my sample size is very small ($n = 5$). How reliable is a p-value computed this way? Is there another way to compute a more reliable p-value?
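For concreteness, the normal-approximation (Wald) p-value described above can be computed as follows; a minimal sketch in which the values $\hat\theta = 0.9$ and $I(\theta_0) = 1$ are hypothetical placeholders, not numbers from the question:

```python
import math

def wald_p_value(theta_hat, n, fisher_info):
    """Two-sided p-value from the asymptotic normality of the MLE.

    Under H0: theta = 0, z = theta_hat * sqrt(n * I(theta0)) is
    approximately N(0, 1) for large n.
    """
    z = theta_hat * math.sqrt(n * fisher_info)
    # P(|Z| > |z|) for Z ~ N(0, 1), via the complementary error function:
    # 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2))
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical example: theta_hat = 0.9, n = 5, unit Fisher information
p = wald_p_value(0.9, 5, 1.0)
```

With $n = 5$ the normal approximation behind this formula is exactly what the question is worried about; the code only makes the calculation explicit.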

1 Answer

If the test statistic deviates from normality for small $n$, then, depending on its distribution under $H_0$, the normal-approximation $p$-value may be smaller than the exact $p$-value; that is, the actual Type I error rate exceeds the nominal level. A familiar example is the Student $t$ distribution, which arises when the variance of the sampling distribution is estimated from normally distributed data.

For small sample sizes, a non-parametric test or exact test may be more appropriate in the sense of ensuring control of Type I error, but the power of the test may be quite low. For example, with $n = 5$, consider testing the binomial proportion hypothesis $$H_0 : p = 1/2 \quad \text{vs.} \quad H_a : p \ne 1/2$$ using the test statistic $\hat p = X/n$. The rejection region $X \in \{0, 5\}$ has a Type I error of $2(1/2)^5 = 1/16 = 0.0625$, meaning that this test is incapable of rejecting $H_0$ at a significance level of $\alpha < 0.0625$ due to the small sample size. But it should be pointed out that this deficiency is not rectified by a test statistic using the normal approximation; it is intrinsic to the discrete nature of the data. I mention it here because, not knowing whether the parametric model for your data is continuous or discrete, it is important to see that any test statistic constructed on a small sample could fail to have the desired Type I error control.
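The Type I error of that rejection region can be verified directly from the binomial pmf; a minimal sketch using only the standard library:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p0 = 5, 0.5
rejection_region = [0, 5]
# Type I error: probability under H0 of landing in the rejection region
alpha = sum(binom_pmf(k, n, p0) for k in rejection_region)
print(alpha)  # 0.0625, i.e. 1/16
```

Since $\{0, 5\}$ is the smallest possible two-sided rejection region, $0.0625$ is the smallest attainable exact significance level at $n = 5$, which is the point made above.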