Generalized Likelihood Ratio Tests and Composite Hypotheses


I'm not quite sure that I understand how the generalized likelihood ratio test works for composite hypotheses; observe the example below:

Let $X_1,...,X_n$ be a random sample from an exponential distribution, $X_i\sim EXP(\theta) \implies E(X_i)=\theta$. Derive the generalized likelihood ratio test of $H_0:\theta=\theta_0$ vs. $H_a: \theta>\theta_0$.

I've been able to do a good portion of the work; we know that, in this case, $\bar{X}$ is the maximum likelihood estimator of $\theta$. But here's where I'm confused. Suppose instead we had $H_0: \theta=\theta_0$ vs. $H_a:\theta\ne\theta_0$. In that case, the likelihood ratio would be given by $$\lambda(\vec{X})=\frac{\bar{x}^n e^{-n\bar{x}/\theta_0+n}}{\theta_0^n}$$ and we would reject the null hypothesis if this value were less than some constant $c$. However, since a composite alternative is given instead, I believe the decision rule needs to change somehow; the problem is that I don't understand how. The textbook lists the following as the final answer to the original question:

Reject $H_0$ if $2n\bar{x}/\theta_0 \ge \chi^2_{1-\alpha}(2n)$

where $\chi^2_{1-\alpha}(2n)$ denotes the $(1-\alpha)$ percentile of the chi-square distribution with $2n$ degrees of freedom. Can somebody show how to work toward this solution? I don't see how to get there, because the alternative is composite and I don't understand what needs to be changed.


There are two answers below.


Using the composite-hypothesis form of the likelihood-ratio statistic (https://en.wikipedia.org/wiki/Likelihood-ratio_test#Composite_hypotheses), with $\Theta = [\theta_0, \infty)$ here, $$ {\displaystyle \Lambda (X)={\frac {\sup\{\,{\mathcal {L}}(\theta \mid X):\theta \in \Theta _{0}\,\}}{\sup\{\,{\mathcal {L}}(\theta \mid X):\theta \in \Theta \,\}}}} = \begin{cases} \frac{e^{-n\overline{x}/\theta_0}/\theta_0^n}{e^{-n \overline{x}/\overline{x}}/ (\overline{x})^n} = \frac{\bar{x}^n e^{-n\bar{x}/\theta_0+n}}{\theta_0^n} & \overline{X} > \theta_0\\[10pt] \frac{e^{-n\overline{x}/\theta_0}/\theta_0^n}{e^{-n \overline{x}/\theta_0}/ \theta_0^n} =1 & \text{else} \end{cases} $$

We reject $H_0$ in favor of $H_a$ if this is sufficiently small. Only the first case is interesting.

"And then we would reject the null hypothesis if this value was less than some constant c"

After doing this, you will see that we reject the null hypothesis for $\overline{X}$ sufficiently large, or equivalently, for $\sum_{i=1}^n X_i$ sufficiently large.
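To make the step from "$\Lambda$ sufficiently small" to "$\overline{X}$ sufficiently large" explicit, one can check monotonicity directly (a sketch; the substitution $t = \bar{x}/\theta_0$ is just for convenience). In the interesting case $\overline{X} > \theta_0$, i.e. $t > 1$, $$\Lambda = t^n e^{n(1-t)}, \qquad \frac{d}{dt}\log\Lambda = n\left(\frac{1}{t}-1\right) < 0 \quad \text{for } t>1,$$ so $\Lambda$ is strictly decreasing in $t$ on that region, and $\Lambda < c$ is equivalent to $\bar{x}/\theta_0$ exceeding some constant $k$.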

The result follows after observing that, under the null hypothesis, $\sum_{i=1}^n X_i \sim \Gamma(n,\theta_0)$ and therefore $ \frac{2}{\theta_0} \sum_{i=1}^n X_i\sim \chi^2(2n)$, using properties of the Gamma distribution: rescaling a $\Gamma(n,\theta_0)$ variable by $2/\theta_0$ gives a $\Gamma(n,2)$ variable, which is exactly a $\chi^2(2n)$ distribution.
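A quick Monte Carlo sketch (the sample size, $\theta_0$, and replication count below are illustrative, not from the answer) can corroborate the distributional claim: under $H_0$, the statistic $\frac{2}{\theta_0}\sum_{i=1}^n X_i$ should behave like a $\chi^2(2n)$ variable, whose mean is $2n$ and variance is $4n$.

```python
# Monte Carlo check (illustrative setup): simulate exponential samples with
# mean theta0 under H0 and compute T = (2/theta0) * sum(X_i). If T ~ chi2(2n),
# its sample mean should be near 2n and its sample variance near 4n.
import random

random.seed(0)
n, theta0, reps = 10, 2.0, 100_000

ts = []
for _ in range(reps):
    # random.expovariate takes the rate 1/theta0, so each draw has mean theta0
    s = sum(random.expovariate(1.0 / theta0) for _ in range(n))
    ts.append(2.0 * s / theta0)

mean = sum(ts) / reps
var = sum((t - mean) ** 2 for t in ts) / reps
print(mean, var)  # mean should be close to 2n = 20, variance close to 4n = 40
```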


We are assessing the following hypotheses: \begin{align*} H_0: \theta \le \theta_0 && H_1: \theta > \theta_0 \end{align*} In this case, the null and alternative parameter spaces are: \begin{align*} \Theta_0 = (-\infty, \theta_0] && \Theta_1 = (\theta_0, \infty) \end{align*} Let us first take the case where $\theta_0 > \hat{\theta}_{MLE}$. For a generic likelihood function, the domain $\Theta_0$ over which we perform the restricted maximization then includes $\hat{\theta}_{MLE}$.

[Figure: a generic likelihood function, its MLE over the whole domain, and a choice of $\theta_0$ greater than $\hat{\theta}_{MLE}$]

The blue dotted line is $\theta_0$ and the black dotted line is $\hat{\theta}_{MLE}$. This example graph makes it clear that the domain $\Theta_0$ (shaded in green) includes the MLE when $\theta_0>\hat{\theta}_{MLE}$. Therefore, the restricted MLE is $\hat{\theta}_0=\hat{\theta}_{MLE}$ when $\theta_0 > \hat{\theta}_{MLE}$.

Next, we consider the case when $\theta_0 \le \hat{\theta}_{MLE}$. In this case, we can redraw the graph to again help us visualize:

[Figure: the same likelihood function, now with $\theta_0$ less than $\hat{\theta}_{MLE}$]

Now we can see that the restricted MLE, taken over the shaded green region, must be $\theta_0$ itself, since that is where the likelihood is largest on $\Theta_0$. In the special case $\theta_0= \hat{\theta}_{MLE}$, this rule still holds.

Therefore, our likelihood ratio test statistic is: \begin{align*} \lambda(\mathbf{x})=\frac{L(\hat{\theta}_0)}{L(\hat{\theta}_{MLE})}=\begin{cases} \frac{L(\hat{\theta}_{MLE})}{L(\hat{\theta}_{MLE})}=1, &\hat{\theta}_{MLE} \le \theta_0\\[10pt] \frac{L(\hat{\theta}_0)}{L(\hat{\theta}_{MLE})}, &\hat{\theta}_{MLE} > \theta_0 \end{cases} \end{align*}
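As a concrete sketch, the piecewise statistic above can be written out for the exponential example from the question, where it simplifies to $(\bar{x}/\theta_0)^n e^{n(1-\bar{x}/\theta_0)}$ in the second case (the function name and sample values below are illustrative, not from the answer):

```python
# Illustrative helper computing the piecewise likelihood-ratio statistic
# for an exponential(theta) sample with mean theta.
import math

def lrt_statistic(xs, theta0):
    """Generalized LR statistic for H0: theta <= theta0 vs H1: theta > theta0."""
    n = len(xs)
    xbar = sum(xs) / n
    if xbar <= theta0:
        # Restricted MLE equals the unrestricted MLE, so lambda = 1.
        return 1.0
    # Otherwise lambda = t^n * exp(n * (1 - t)) with t = xbar / theta0 > 1.
    t = xbar / theta0
    return t ** n * math.exp(n * (1.0 - t))

print(lrt_statistic([1.0, 2.0, 3.0], theta0=5.0))  # xbar = 2 <= 5, so lambda = 1
print(lrt_statistic([4.0, 6.0, 8.0], theta0=2.0))  # xbar = 6 > 2, lambda well below 1
```

Small values of the statistic (second call) are the ones that lead to rejecting $H_0$.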

This video also gives an excellent explanation of the composite hypothesis issue: https://www.youtube.com/watch?v=9RvLy554NnI.