Unknown variance hypothesis testing for normal random variables


Let $X_1,...,X_n$ be iid random variables each with a $N(\mu_0,\sigma^2)$ distribution, where $\mu_0$ is known and $\sigma^2$ is unknown. Find the best (most powerful) test of size at most $\alpha$ for testing $H_0:\sigma^2=\sigma_0^2$ against $H_1:\sigma^2=\sigma_1^2$ for known $\sigma_0^2$ and $\sigma_1^2 > \sigma_0^2$.

Progress

These are simple hypotheses, so I should be able to use the Neyman-Pearson lemma to find the best test of size $\leq \alpha$. However, when I carry out the calculation, I find it difficult to simplify the likelihood ratio, or to show that it is strictly increasing in some statistic (so that I can, for example, reduce the problem to the distribution of a single statistic, as one would with $\bar X$ in a test for the mean). The likelihood ratio is

$$\Lambda_{\underline{x}}(H_0;H_1)=\left(\frac{\sigma_0}{\sigma_1}\right)^n \exp\left(\sum(x_i-\mu_0)^2 \left[\frac{1}{2\sigma_0^2}-\frac{1}{2\sigma_1^2}\right]\right)$$
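One observation about this formula (a sketch, writing $T = \sum_i (x_i - \mu_0)^2$): the data enter only through $T$, and since $\sigma_1^2 > \sigma_0^2$ the bracketed coefficient is positive, so the ratio is strictly increasing in $T$:

$$\Lambda_{\underline{x}}(H_0;H_1) = \left(\frac{\sigma_0}{\sigma_1}\right)^n e^{kT}, \qquad k = \frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2} > 0.$$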

I understand what the Neyman-Pearson lemma states, I just don't understand how we apply it to this particular problem, as it seems that the likelihood ratio I have is too complicated to use: I think there must be a way to distill the essential information from it, perhaps by using monotonicity, but I'm not sure how to implement this in practice.
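If the monotonicity route works out, rejecting for large $\Lambda$ is the same as rejecting for large $T = \sum_i (X_i - \mu_0)^2$, and under $H_0$ we have $T/\sigma_0^2 \sim \chi^2_n$, suggesting the critical value $c = \sigma_0^2\,\chi^2_{n,1-\alpha}$. A quick Monte Carlo sanity check of that candidate test (a sketch with illustrative parameter values, assuming NumPy and SciPy are available):

```python
# Sketch: Monte Carlo check of the size and power of the candidate test
# "reject when T = sum (X_i - mu0)^2 exceeds sigma0^2 * chi2_{n, 1-alpha}".
# The parameter values below (n, alpha, sigma1, etc.) are illustrative
# assumptions, not taken from the question.
import numpy as np
from scipy.stats import chi2

def rejection_rate(n=10, mu0=0.0, sigma0=1.0, sigma=1.0,
                   alpha=0.05, reps=20_000, seed=0):
    """Fraction of simulated samples (with true s.d. `sigma`) for which
    T exceeds the chi-square critical value computed under H0."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu0, sigma, size=(reps, n))
    T = ((x - mu0) ** 2).sum(axis=1)
    c = sigma0 ** 2 * chi2.ppf(1 - alpha, df=n)  # critical value under H0
    return (T > c).mean()

size = rejection_rate(sigma=1.0)   # data from H0: rate should be about alpha
power = rejection_rate(sigma=2.0)  # data from H1 with sigma1^2 = 4
print(f"estimated size  = {size:.3f}")
print(f"estimated power = {power:.3f}")
```

The simulated size should sit near $\alpha$ and the power should be noticeably larger, consistent with the test being most powerful against $\sigma_1^2 > \sigma_0^2$.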