The usual proofs of the asymptotic distribution of the likelihood ratio test (LRT) being a chi-squared assume that the maximum likelihood (ML) estimators are consistent. Is it possible to find an asymptotic distribution for the LRT without the ML estimators being consistent?
265 Views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
There is 1 best solution below.
Consistency means that the MLEs converge in probability to the true parameter. There are at least two senses in which this can fail: (1) the estimators do not converge in probability to any point at all, or (2) they converge, but to the wrong value (an asymptotic bias).
These are very different cases, so any theorem on the convergence of the LRT needs to impose some definite structure. In case (1), you are out of luck. In case (2), you can correct for the bias, and then you are back in the consistent case.
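As a toy illustration of case (2) (a hypothetical setup with made-up numbers, not taken from the question): if an estimator converges in probability to $\theta + c$ for a known offset $c$, then subtracting $c$ restores consistency.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, c, n = 2.0, 0.5, 200_000  # true parameter, known offset, sample size

x = rng.normal(theta, 1.0, size=n)

# An artificially inconsistent estimator: it converges to theta + c, not theta.
theta_hat_biased = x.mean() + c

# Correcting the known asymptotic bias recovers a consistent estimator.
theta_hat_corrected = theta_hat_biased - c

print(theta_hat_biased, theta_hat_corrected)  # near 2.5 and 2.0
```

If the bias $c$ were unknown and not estimable, this correction would of course be unavailable, which is exactly why case (1) is hopeless without further structure.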
Response to OP Comment
First, this paper is not directly related to your question: It concerns the quasi-loglikelihood, not the log-likelihood, and it pertains to a test of hypothesis, not an estimator.
However, I think I see what your concern is, so I will try to add some of my thoughts to clarify what was said in that paragraph.
The consistency of a hypothesis test is different from the consistency of an estimator, although the two are related. For a hypothesis test to be consistent, the probability of a Type I or Type II error should go to 0 as $n \to \infty$. This places much weaker constraints on the behavior of your test statistic under the alternative hypothesis. As long as the test statistic converges to a point outside the rejection region when the null is false (again as $n \to \infty$), the test will correctly reject the null hypothesis in an asymptotic sense. As the authors note, the main effect of an inconsistent test statistic under the alternative hypothesis is an extra layer of uncertainty regarding the power of the test.
However, the Type I error probability is still known and is usually the main focus of a hypothesis test (sample size calculations should be addressed by direct simulation if possible).
This is the gist of that paragraph: the authors are pointing out that the hypothesis test is still valid even though the assumptions for the consistency of the estimator underlying the test are violated. This is due to the forgiving nature of hypothesis tests: they are binary, decision-theoretic procedures that only require you to be able to identify when an outcome is rare under the null hypothesis.
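Both points (a known Type I error rate, and high power under a fixed alternative) can be checked with a small Monte Carlo sketch. This is my own toy example, not from the paper: testing $H_0: \mu = 0$ for $X_i \sim N(\mu, 1)$, where $2\log(\text{LR}) = n\bar{X}^2$ is exactly $\chi^2_1$ under the null.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, alpha = 100, 20_000, 0.05
crit = 3.841  # chi-squared(1) upper critical value at alpha = 0.05

# LRT of H0: mu = 0 vs H1: mu free, with X_i ~ N(mu, 1):
# 2*log(LR) simplifies to n * xbar^2, which is chi2(1) under H0 (Wilks).
x0 = rng.normal(0.0, 1.0, size=(reps, n))
type1 = (n * x0.mean(axis=1) ** 2 > crit).mean()  # observed Type I error

# Under a fixed alternative (mu = 0.3) the rejection probability is already
# large at n = 100 and tends to 1 as n grows: the test is consistent.
x1 = rng.normal(0.3, 1.0, size=(reps, n))
power = (n * x1.mean(axis=1) ** 2 > crit).mean()

print(type1, power)  # type1 near 0.05; power well above 0.05
```

The simulated Type I rate sits at the nominal 5% level, which is the sense in which the test remains valid even when the power under the alternative carries extra uncertainty.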