Suppose we have the following general setup, where we wish to test the hypothesis $$ H_0 : \theta\in\Theta_0 \qquad\text{versus}\qquad H_1 : \theta\not\in\Theta_0. $$ It is well established that, under some regularity conditions, the likelihood ratio $$ \Lambda\left(\mathbf{X}\right) = \frac{\sup_{\theta\in\Theta_0} \mathcal{L}\left(\theta;\mathbf{X}\right)}{\sup_{\theta\in\Theta} \mathcal{L}\left(\theta;\mathbf{X}\right)} $$ satisfies $-2\log\Lambda\left(\mathbf{X}\right)\sim\chi^2_k$ asymptotically, where $k = \dim\left(\Theta\right) - \dim\left(\Theta_0\right)$. What I don't understand is why $\Lambda\left(\mathbf{X}\right)$ does not simply converge to $1$ in the limit of infinite data when the null hypothesis is true, which would force $-2\log\Lambda\left(\mathbf{X}\right) \to 0$ instead.
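The $\chi^2_k$ claim itself is easy to check numerically. Here is a minimal simulation sketch for the special case $X_i \sim \mathcal{N}(\mu, 1)$ with $H_0 : \mu = 0$ (so $k = 1$); in this model $\Lambda$ reduces to $\exp\left(-n\bar{X}^2/2\right)$, so $-2\log\Lambda = n\bar{X}^2$, and the $\chi^2_1$ law is in fact exact, not just asymptotic:

```python
import numpy as np
from scipy import stats

# Check of the chi-square(1) claim for X_i ~ N(mu, 1), H0: mu = 0 (true).
# In this model -2 log Lambda has the closed form n * xbar^2.
rng = np.random.default_rng(0)
n, reps = 500, 10_000

xbar = rng.normal(loc=0.0, scale=1.0, size=(reps, n)).mean(axis=1)
stat = n * xbar**2  # -2 log Lambda for each simulated dataset

# Empirical vs. chi-square(1) tail probabilities at a few cutoffs.
for c in (1.0, 2.71, 3.84):
    print(c, (stat > c).mean(), stats.chi2.sf(c, df=1))
```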
Allow me to explain. We know that maximum likelihood is consistent, which I take to mean that $$ \lim_{n\to\infty} \Pr\left[\hat{\theta} = \theta\right] = 1. $$ Therefore, if the null is true (i.e. $\theta\in\Theta_0$), it should also be the case that, with probability one, $\hat{\theta} \in\Theta_0$ in the limit. By definition, $\hat{\theta}$ maximizes the denominator of the likelihood ratio, and because $\hat{\theta}\in\Theta_0$, it also maximizes the numerator. Since the numerator and denominator are maximized at the same point, they are equal, so that $$ \Lambda\left(\mathbf{X}\right) = \frac{\mathcal{L}\left(\hat{\theta};\mathbf{X}\right)}{\mathcal{L}\left(\hat{\theta};\mathbf{X}\right)} = 1, $$ and hence $-2\log\Lambda\left(\mathbf{X}\right)=0$.
I know there is a problem with this reasoning, but I don't know what it is. It seems the resolution must have something to do with the $k$ additional free parameters in the unrestricted parameter space $\Theta$.
Let's take a counterexample: draw $n$ samples from a normal distribution with known variance $1$.
Suppose the null hypothesis is that $\mu=0$, and that this is in fact true.
We will not be at all surprised when the sample mean is at least one standard error, i.e. $\pm\frac{1}{\sqrt{n}}$, away from the population mean; indeed, since $\sqrt{n}\,\bar{X} \sim \mathcal{N}(0,1)$ under the null, we expect a result at least this extreme roughly $32\%$ of the time, no matter how large $n$ is.
But in this or more extreme cases, the likelihood ratio is $\Lambda\left(\mathbf{X}\right) = \exp\left(-n\bar{X}^2/2\right) \le e^{-1/2} \approx 0.607$, no matter how large $n$ is. Thus the likelihood ratio will not converge to $1$ as $n$ increases; in fact its distribution is the same for every $n$.
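A quick simulation sketch makes this concrete. It draws $\bar{X}$ directly from its exact $\mathcal{N}(0, 1/n)$ sampling distribution rather than averaging $n$ raw values; the fraction of datasets with $\Lambda \le e^{-1/2}$ hovers around $0.32$ at every $n$:

```python
import numpy as np

# Counterexample model: X_i ~ N(0, 1) (null true), variance known.
# Here Lambda = exp(-n * xbar^2 / 2), and xbar ~ N(0, 1/n) exactly,
# so we can sample xbar directly instead of averaging n raw draws.
rng = np.random.default_rng(1)
reps = 100_000

for n in (10, 1_000, 100_000):
    xbar = rng.normal(scale=1 / np.sqrt(n), size=reps)
    lam = np.exp(-n * xbar**2 / 2)
    # Median of Lambda and fraction of datasets with Lambda <= e^{-1/2}:
    # both are stable in n (~0.80 and ~0.32 respectively).
    print(n, round(np.median(lam), 3), round((lam <= np.exp(-0.5)).mean(), 3))
```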
A more general answer to your original question is that small random effects, entirely consistent with the null hypothesis, can still have a substantial effect on the likelihood ratio when $n$ is large.
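Here is a sketch of the general mechanism, under the usual regularity conditions, in the one-parameter case with a simple null $H_0:\theta=\theta_0$ and Fisher information $I(\theta_0)$. Writing $\ell(\theta)$ for the log-likelihood and Taylor-expanding about $\hat{\theta}$ (where $\ell'(\hat{\theta})=0$) gives $$ -2\log\Lambda\left(\mathbf{X}\right) = 2\left[\ell(\hat{\theta}) - \ell(\theta_0)\right] \approx n\, I(\theta_0)\left(\hat{\theta} - \theta_0\right)^2. $$ Consistency only guarantees $\hat{\theta} - \theta_0 = O_p(1/\sqrt{n})$, not $\hat{\theta} = \theta_0$, so the two factors balance: the squared deviation shrinks like $1/n$ while the curvature of the log-likelihood grows like $n$, leaving $-2\log\Lambda$ of order one. Since $\sqrt{n}\left(\hat{\theta} - \theta_0\right) \to \mathcal{N}\left(0, I(\theta_0)^{-1}\right)$, the right-hand side converges in distribution to $\chi^2_1$, recovering the stated result with $k=1$.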