GLRT statistic for composite normal hypothesis, two unknowns


Suppose $X_1,\dots,X_n \overset{\text{iid}}{\sim} \mathcal N(\mu, \sigma^2)$, both parameters unknown. We want to test $H_0: \mu \leq \mu_0$ vs. $H_1: \mu > \mu_0$. Show that the LRT (likelihood ratio test) statistic is given by $$\lambda(\mathbf{x}) = \begin{cases} 1, & \bar{X} \leq \mu_0 \\ \left(\dfrac{\hat{\sigma}^2}{\hat{\sigma}_0^2}\right)^{n/2}, & \bar{X} > \mu_0 \end{cases}$$ where $$\begin{cases} \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{X})^2 \\ \hat{\sigma}_0^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \mu_0)^2 \end{cases}$$
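As a quick numerical sketch of the target formula (not part of the original problem; the sample values and function name are illustrative), the statistic can be computed directly from the two variance estimates:

```python
import numpy as np

def lrt_statistic(x, mu0):
    """LRT statistic for H0: mu <= mu0 vs H1: mu > mu0, normal model, both parameters unknown."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    if xbar <= mu0:
        return 1.0                              # restricted and unrestricted maxima coincide
    sigma2_hat = np.mean((x - xbar) ** 2)       # unrestricted MLE of sigma^2
    sigma2_0 = np.mean((x - mu0) ** 2)          # MLE of sigma^2 with mu fixed at mu0
    return (sigma2_hat / sigma2_0) ** (n / 2)

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=30)
print(lrt_statistic(x, mu0=0.0))                # a value in (0, 1]; small values favor H1
```

Since $\hat\sigma_0^2 \geq \hat\sigma^2$ always, the statistic lies in $(0, 1]$, and the test rejects for small values.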

Note that the LRT statistic for a composite hypothesis ($H_0$ does not completely determine the distribution — contrast our composite $H_0: \mu \leq \mu_0$ with the simple $H_0: \mu = \mu_0$) is defined as $$\lambda(\mathbf{x}) = \frac{\max_{\theta \in \Theta_0} L(\theta;\mathbf{x})}{\max_{\theta \in \Theta} L(\theta;\mathbf{x})}$$ where $ \Theta_0 = \{(\mu, \sigma): \mu \leq \mu_0, \sigma > 0\}, \Theta = \{(\mu, \sigma) : \mu \in (-\infty, \infty), \sigma > 0\} $

I am having trouble calculating the numerator of the LRT statistic, specifically dealing with the condition $\mu \leq \mu_0$. If we were dealing with a simple hypothesis, say $H_0: \mu = \mu_0$, we could simply plug $\mu_0$ into the likelihood function of normal iid random variables. Here, under $H_0$, $\mu$ ranges over a set of parameter values. What do I do?

I am thinking that if I can put a global bound on the likelihood function with respect to $\mu$ under $H_0$, then the likelihood becomes a function of the single parameter $\sigma$, which I can then maximize by taking derivatives. Is that the right way to go about this? If I can just figure out the numerator, I know how to solve the problem from there.

Accepted answer:

Look at the log-likelihood function (up to an additive constant that does not depend on the parameters): $$ \ln L(\theta, \mathbf X) = -n\ln \sigma - \frac{1}{2\sigma^2}\sum_{i=1}^n (X_i-\mu)^2. $$ Concerning the denominator: maximizing over $(\mu,\sigma)\in\Theta$, the global maximum is attained at the point $\hat\mu=\overline X$, $\hat\sigma^2=\frac{1}{n}\sum_{i=1}^n (X_i-\overline X)^2$.
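A small numerical check of this claim (illustrative only; the grid ranges and seed are arbitrary choices, and the $2\pi$ constant is dropped as above): the closed-form MLE should dominate the log-likelihood over any grid of nearby parameter values.

```python
import numpy as np

def loglik(mu, sigma2, x):
    # log-likelihood, dropping the additive constant -(n/2) * log(2*pi);
    # note -n*log(sigma) = -(n/2)*log(sigma^2)
    n = len(x)
    return -0.5 * n * np.log(sigma2) - np.sum((x - mu) ** 2) / (2 * sigma2)

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.5, size=50)
mu_hat = x.mean()                               # closed-form MLE of mu
s2_hat = np.mean((x - mu_hat) ** 2)             # closed-form MLE of sigma^2

best = loglik(mu_hat, s2_hat, x)
for mu in np.linspace(mu_hat - 1.0, mu_hat + 1.0, 41):
    for s2 in np.linspace(0.5 * s2_hat, 2.0 * s2_hat, 41):
        assert loglik(mu, s2, x) <= best + 1e-9
print("closed-form MLE dominates the grid")
```

This is not a proof, of course; the proof comes from setting the partial derivatives to zero, as the answer does below for the restricted case.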

Return to the numerator. If we work inside $\Theta_0$ and $\overline X \leq \mu_0$, then $(\hat\mu,\hat\sigma^2)\in\Theta_0$: the restricted and unrestricted maxima coincide, so the likelihood ratio equals $1$.

If $\overline X > \mu_0$, look at the derivative of the log-likelihood function at any point $\mu\leq \mu_0 <\overline X$, $\sigma>0$: $$ \frac{\partial}{\partial \mu}\ln L(\theta, \mathbf X) = \frac{n}{\sigma^2}\left( \overline X-\mu\right)>0, $$ so the log-likelihood function is increasing in $\mu$ irrespective of $\sigma$. Hence for any fixed $\sigma>0$ $$ \max_{\mu\leq \mu_0} L(\mu,\sigma, \mathbf X) = L(\mu_0,\sigma, \mathbf X). $$ Then we can take $\mu=\mu_0$ and maximize over $\sigma$: $$ \frac{\partial}{\partial \sigma}\ln L(\mu_0,\sigma, \mathbf X) = 0 \iff \sigma^2=\hat\sigma_0^2=\frac1n\sum_{i=1}^n (X_i-\mu_0)^2. $$ Substituting both maximizers back into $L$ and taking the ratio then yields $\lambda(\mathbf x) = \left(\hat\sigma^2/\hat\sigma_0^2\right)^{n/2}$, as required.
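To see numerically that the two maximized likelihoods really produce the closed form $(\hat\sigma^2/\hat\sigma_0^2)^{n/2}$, one can sketch the $\overline X > \mu_0$ case as follows (illustrative check only; the sample and seed are arbitrary, and the dropped $2\pi$ constant cancels in the ratio):

```python
import numpy as np

def loglik(mu, sigma2, x):
    # log-likelihood up to the additive constant -(n/2) * log(2*pi)
    return -0.5 * len(x) * np.log(sigma2) - np.sum((x - mu) ** 2) / (2 * sigma2)

rng = np.random.default_rng(2)
mu0 = 0.0
x = rng.normal(1.0, 1.0, size=40)       # drawn so that xbar > mu0 with high probability
assert x.mean() > mu0

s2_hat = np.mean((x - x.mean()) ** 2)   # unrestricted MLE of sigma^2
s2_0 = np.mean((x - mu0) ** 2)          # restricted MLE: mu fixed at mu0

# lambda as the ratio of the two maximized likelihoods...
lam_direct = np.exp(loglik(mu0, s2_0, x) - loglik(x.mean(), s2_hat, x))
# ...agrees with the closed form derived above
lam_closed = (s2_hat / s2_0) ** (len(x) / 2)
print(lam_direct, lam_closed)
```

The agreement is exact up to floating point: at each maximizer the quadratic term reduces to $-n/2$, so those terms cancel and only the $-\frac n2 \ln \sigma^2$ parts survive in the ratio.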