I'm not quite sure that I understand how the generalized likelihood ratio test works for composite hypotheses; observe the example below:
Let $X_1,...,X_n$ be a random sample from an exponential distribution, $X_i\sim EXP(\theta) \implies E(X_i)=\theta$. Derive the generalized likelihood ratio test of $H_0:\theta=\theta_0$ vs. $H_a: \theta>\theta_0$.
I've been able to do a good portion of the work; we know that, in this case, $\bar{X}$ is the maximum likelihood estimator of $\theta$. But here's where I'm confused. Suppose instead we had $H_0: \theta=\theta_0$ vs. $H_a:\theta\ne\theta_0$. In that case, the likelihood ratio would be: $$\lambda(\vec{X})=\frac{\bar{x}^n e^{-n\bar{x}/\theta_0+n}}{\theta_0^n}$$ And then we would reject the null hypothesis if this value were less than some constant $c$. However, since a composite hypothesis is given instead, I believe that the decision rule needs to change somehow; the problem is that I don't understand how. The textbook lists the following as the final answer to the original question:
Reject $H_0$ if $2n\bar{x}/\theta_0 \ge \chi^2_{1-\alpha}(2n)$
where $\chi^2_{1-\alpha}(2n)$ denotes the $1-\alpha$ percentile of the chi-square distribution with $2n$ degrees of freedom. Can somebody show how to work toward this solution? I don't know how they get there, because this is a composite hypothesis and I don't understand what needs to change.
Using https://en.wikipedia.org/wiki/Likelihood-ratio_test#Composite_hypotheses, $$ {\displaystyle \Lambda (X)={\frac {\sup\{\,{\mathcal {L}}(\theta \mid X):\theta \in \Theta _{0}\,\}}{\sup\{\,{\mathcal {L}}(\theta \mid X):\theta \in \Theta \,\}}}} = \begin{cases} \frac{e^{-n\overline{x}/\theta_0}/\theta_0^n}{e^{-n \overline{x}/\overline{x}}/ (\overline{x})^n} = \frac{\bar{x}^n e^{-n\bar{x}/\theta_0+n}}{\theta_0^n} & \overline{X} > \theta_0\\[10pt] \frac{e^{-n\overline{x}/\theta_0}/\theta_0^n}{e^{-n \overline{x}/\theta_0}/ \theta_0^n} =1 & \text{else} \end{cases} $$
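To see where the two cases come from, note that the likelihood of the sample is $$\mathcal{L}(\theta \mid X) = \theta^{-n} e^{-n\bar{x}/\theta}.$$ Under $H_0$ we have $\Theta_0 = \{\theta_0\}$, so the numerator is just $\mathcal{L}(\theta_0 \mid X)$. The full parameter space for this one-sided problem is $\Theta = [\theta_0, \infty)$, and $\mathcal{L}(\theta \mid X)$ increases on $(0, \bar{x})$ and decreases on $(\bar{x}, \infty)$, so the MLE restricted to $\Theta$ is $\max(\bar{x}, \theta_0)$. Plugging $\bar{x}$ into the denominator when $\bar{X} > \theta_0$, and $\theta_0$ otherwise, gives exactly the two cases above.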
We reject $H_0$ in favor of $H_a$ if this is sufficiently small. Only the first case is interesting.
After doing this, you will see that we reject the null hypothesis for $\overline{X}$ sufficiently large, or equivalently, for $\sum_{i=1}^n X_i$ sufficiently large.
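To make "sufficiently small $\Lambda$ $\iff$ sufficiently large $\overline{X}$" explicit, set $t = \bar{x}/\theta_0$. For $t > 1$, $$\Lambda = t^n e^{n(1-t)}, \qquad \frac{d}{dt}\log\Lambda = \frac{n}{t} - n < 0,$$ so $\Lambda$ is strictly decreasing in $\bar{x}$ on $\bar{x} > \theta_0$, and the event $\{\Lambda \le c\}$ coincides with $\{\bar{x} \ge c'\}$ for a suitable constant $c'$.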
The result follows after observing that, under the null hypothesis, $\sum_{i=1}^n X_i \sim \Gamma(n,\theta_0)$ (shape $n$, scale $\theta_0$), and rescaling gives $\frac{2}{\theta_0} \sum_{i=1}^n X_i\sim \Gamma(n, 2) = \chi^2(2n)$ by properties of the Gamma distribution.
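If you want a numerical sanity check of this distributional fact, here is a short Monte Carlo simulation (the values $n=10$, $\theta_0=2$, $\alpha=0.05$ are arbitrary choices of mine, not from the problem): under $H_0$, the rejection rate of the rule $2n\bar{x}/\theta_0 \ge \chi^2_{1-\alpha}(2n)$ should come out close to $\alpha$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, theta0, alpha = 10, 2.0, 0.05   # hypothetical values for illustration
reps = 100_000

# Draw `reps` samples of size n under H0: X_i ~ Exp with mean theta0.
x = rng.exponential(scale=theta0, size=(reps, n))

# Test statistic 2*sum(X_i)/theta0 = 2n*xbar/theta0, which is chi^2(2n) under H0.
stat = 2.0 * x.sum(axis=1) / theta0

# Reject when the statistic exceeds the 1-alpha chi-square percentile.
crit = stats.chi2.ppf(1 - alpha, df=2 * n)
size = np.mean(stat >= crit)
print(size)  # empirical size of the test; should be close to alpha
```

With enough replications the empirical rejection rate settles near $0.05$, confirming that the test has the advertised size.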