Is it possible to find an asymptotic distribution for the likelihood ratio test without the maximum likelihood estimators being consistent?


The usual proofs of the asymptotic distribution of the likelihood ratio test (LRT) being a chi-squared assume that the maximum likelihood (ML) estimators are consistent. Is it possible to find an asymptotic distribution for the LRT without the ML estimators being consistent?
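For context, the chi-squared limit asserted by Wilks' theorem is easy to check by simulation in a regular model. Here is a minimal sketch in a Normal-mean model with known variance (the model, sample size, and seed are illustrative choices of mine, not from the question): for $X_i \sim N(\mu, 1)$ and $H_0: \mu = 0$, the LRT statistic is $2 \log \Lambda = n \bar{X}^2$, which is $\chi^2_1$ under the null.

```python
import numpy as np

# Monte Carlo sketch of Wilks' theorem in a regular model (illustrative):
# X_i ~ N(mu, 1), H0: mu = 0. The LRT statistic is 2*log LR = n * xbar^2,
# which is chi-squared with 1 df under H0.
rng = np.random.default_rng(1)
n, reps = 50, 20_000
xbar = rng.standard_normal((reps, n)).mean(axis=1)  # sample means under H0
lrt = n * xbar**2

# Fraction exceeding the chi2_1 0.95 quantile (3.841) should be near 5%.
print((lrt > 3.841).mean())
```

The MLE here is consistent, which is exactly the assumption the question asks about dropping.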

Best Answer

Consistency means that the MLE converges in probability to the true parameter, so there are at least two senses in which it can fail:

  1. The estimator does not converge at all: think of the sample mean of Cauchy random variables.
  2. It converges to something other than the true parameter (i.e., it has asymptotic bias).

These are very different cases, so any theorem on the convergence of the LRT needs to add some definite structure. In case (1), you are out of luck. In case (2), you can correct for the asymptotic bias, and then you are back in the consistent case.
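Failure mode (1) can be seen directly by simulation. This sketch (my own illustrative setup, with a hypothetical helper `running_mean_spread`) compares how the sampling spread of the mean behaves for Normal versus Cauchy data as $n$ grows:

```python
import numpy as np

# Illustrative sketch of failure mode (1): the sample mean of Cauchy draws
# never settles down as n grows, unlike the Normal case. The sample mean of
# n iid Cauchy(0,1) variables is itself Cauchy(0,1), so its spread is O(1).
rng = np.random.default_rng(0)

def running_mean_spread(sampler, n, reps=200):
    """IQR of the sample mean across `reps` replications at sample size n."""
    means = np.array([sampler(rng, n).mean() for _ in range(reps)])
    q75, q25 = np.percentile(means, [75, 25])
    return q75 - q25

normal = lambda rng, n: rng.standard_normal(n)
cauchy = lambda rng, n: rng.standard_cauchy(n)

for n in (100, 10_000):
    print(f"n={n:6d}  Normal IQR: {running_mean_spread(normal, n):.4f}  "
          f"Cauchy IQR: {running_mean_spread(cauchy, n):.4f}")
```

The Normal spread shrinks like $1/\sqrt{n}$ while the Cauchy spread stays roughly constant near the Cauchy IQR of 2, so no bias correction can rescue the estimator.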

Response to OP Comment

First, this paper is not directly related to your question: it concerns the quasi-log-likelihood, not the log-likelihood, and it pertains to a test of hypothesis, not an estimator.

However, I think I see what your concern is, so I will try to add some of my thoughts to clarify what was said in that paragraph.

The consistency of a hypothesis test is different from the consistency of an estimator, although the two are related. For a hypothesis test to be consistent, the probability of a Type I or Type II error should go to 0 as $n \to \infty$. This places much weaker constraints on the behavior of your test statistic under the alternative hypothesis. As long as the test statistic converges to a point outside the acceptance region when the null is false (again as $n \to \infty$), the test will correctly reject the null hypothesis in an asymptotic sense. As the authors note, the main effect of an inconsistent test statistic under the alternative hypothesis is an extra layer of uncertainty regarding the power of the test.
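Test consistency in this sense can be sketched with the same toy Normal-mean LRT (my own illustrative choices of alternative, sample sizes, and seed): under a fixed alternative, the statistic drifts out of the acceptance region, so the rejection probability tends to 1 as $n$ grows.

```python
import numpy as np

# Illustrative sketch of test consistency: under the alternative mu = 0.3,
# the LRT "reject when n * xbar^2 > 3.841" (the chi2_1 0.95 quantile)
# rejects with probability tending to 1 as n grows, because the statistic
# grows like n * mu^2 and leaves the acceptance region.
rng = np.random.default_rng(2)
mu, reps = 0.3, 5_000

powers = {}
for n in (20, 100, 500):
    xbar = (mu + rng.standard_normal((reps, n))).mean(axis=1)
    powers[n] = (n * xbar**2 > 3.841).mean()
    print(f"n={n:4d}  empirical power ~ {powers[n]:.3f}")
```

Note that nothing here requires the power at a fixed $n$ to be known precisely; that is the "extra layer of uncertainty" the authors mention.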

However, the Type I error probability is still known, and it is usually the main focus of a hypothesis test (power and sample-size calculations can be addressed by direct simulation where possible).

This is the gist of that paragraph: the authors are pointing out that the hypothesis test is still valid even though the assumptions for the consistency of the estimator underlying the test are violated. This is due to the forgiving nature of hypothesis tests: they are binary, decision-theoretic procedures that only require that you can identify when an outcome is rare under the null hypothesis.