Understanding likelihood ratio in Neyman-Pearson Lemma


I'm trying to understand the likelihood ratio in the Neyman-Pearson Lemma. More precisely:

$\frac{L_A}{L_N} \ge K$, for some constant $K$, where $L_A$ is the likelihood function under the alternative hypothesis, and $L_N$ is the likelihood function under the null hypothesis.

My understanding is that we want to put observations in the rejection region when they are much more likely under the alternative, but I don't really see how the likelihood ratio captures that.

I also don't really get how we find $K$ through the inequality:

$P((X_1,\dots,X_n) \in W_K \mid H_0) \le \alpha$,

where $W_K$ is the critical region determined by $K$, $H_0$ is the null hypothesis, and $\alpha$ is the significance level.

Does this inequality mean that we are looking for a $K$ for which observations are far more likely under the alternative, while at the same time the probability of landing in $W_K$ under the null stays at or below the significance level $\alpha$ chosen at the start? So in the end we are not rejecting directly at $\alpha$, but via a $K$ that satisfies the first inequality?

Best answer

If you are trying to test $H_0 : \theta = \theta_0$ against $H_1 : \theta = \theta_1$ for some unknown parameter $\theta$, you reject $H_0$ if $$ r(\theta_0,\theta_1|X) = \frac{L(\theta_0|X)}{L(\theta_1|X)} \leq K_{\alpha}, $$ since, given a dataset $X$, if $H_0$ is false then the above ratio is expected to be small. To find $K_{\alpha}$ for a particular significance level $\alpha$, you find the value which satisfies $$ \alpha = P\big(r(\theta_0,\theta_1|X) \leq K_{\alpha} \,\big|\, \theta = \theta_0 \big). $$ $K_{\alpha}$ is then the value for which the probability of finding a likelihood ratio at least as extreme (smaller, in this case), given that $H_0$ is true, equals $\alpha$; i.e. the rejection region for the ratio is $(0, K_{\alpha}]$.
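In other words, $K_\alpha$ is just the $\alpha$-quantile of the ratio's distribution under $H_0$. A minimal Monte Carlo sketch of that idea (my own illustrative setup, not from the answer: unit-variance normal data, $\theta_0 = 0$ vs $\theta_1 = 1$, $n = 20$):

```python
import math
import random

random.seed(0)
theta0, theta1, n, alpha = 0.0, 1.0, 20, 0.05

def log_ratio(xs):
    # ln[ L(theta0|x) / L(theta1|x) ] for unit-variance normal data:
    # the -(x - theta)^2 / 2 terms of the log-likelihoods cancel to this sum
    return sum((x - theta1) ** 2 - (x - theta0) ** 2 for x in xs) / 2.0

# Simulate the likelihood ratio under H0 (theta = theta0) ...
ratios = sorted(
    math.exp(log_ratio([random.gauss(theta0, 1.0) for _ in range(n)]))
    for _ in range(20000)
)

# ... and take its empirical alpha-quantile as K_alpha
K_alpha = ratios[int(alpha * len(ratios))]

# By construction, about 5% of null datasets give a ratio <= K_alpha
type1 = sum(r <= K_alpha for r in ratios) / len(ratios)
print(K_alpha, type1)
```

Here the Type I error rate comes out at $\approx 0.05$ by construction, which is exactly what the defining equation for $K_\alpha$ demands.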

Example

Testing for the mean of a normal distribution (known $\sigma^2$) with the hypotheses above, the ratio is $$ r(\theta_0,\theta_1|X) = \exp \left\{ \frac{n(\theta_0-\theta_1)}{\sigma^2}\overline{x} + \frac{n(\theta_1^2-\theta_0^2)}{2\sigma^2} \right\}. $$ What you are trying to do is find $K$ such that the probability of $r$ being less than $K$ is $\alpha$, and then reject $H_0$ if $r \leq K.$ Say $\alpha=0.05$ and $\theta_1 > \theta_0$; then \begin{align} 0.05 &= P\left ( \exp \left\{ \frac{n(\theta_0-\theta_1)}{\sigma^2}\overline{x} + \frac{n(\theta_1^2-\theta_0^2)}{2\sigma^2} \right\} \leq K \right ) \\ &= P\left ( \frac{n(\theta_0-\theta_1)}{\sigma^2}\overline{x} + \frac{n(\theta_1^2-\theta_0^2)}{2\sigma^2} \leq \ln{K} \right ) \\ &= P\left ( \frac{n(\theta_0-\theta_1)}{\sigma^2}\overline{x} \leq \ln{K} -\frac{n(\theta_1^2-\theta_0^2)}{2\sigma^2} \right ) \\ &= P\left ( \overline{x} \geq \left\{\ln{K} -\frac{n(\theta_1^2-\theta_0^2)}{2\sigma^2} \right \}\frac{\sigma^2}{n(\theta_0-\theta_1)}\right), \end{align} where the last step reverses the inequality because dividing by $n(\theta_0-\theta_1)/\sigma^2 < 0$ flips its direction. Since you are assuming $\theta = \theta_0$, you have $\overline{x}\sim N(\theta_0,\frac{\sigma^2}{n})$, so you can standardize the last expression and use the standard normal tables. From there it is straightforward to determine $K$.
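As a concrete numerical check, here is a short Python sketch with illustrative numbers of my own (not from the answer: $\theta_0 = 0$, $\theta_1 = 1$, $\sigma = 1$, $n = 25$, $\alpha = 0.05$). It first finds the critical value $c$ for $\overline{x}$ from the standard normal quantile, exactly as the tables step above suggests, and then backs out the corresponding $K$ by evaluating the log-likelihood ratio at $\overline{x} = c$:

```python
from math import exp, log, sqrt
from statistics import NormalDist

# Illustrative (assumed) numbers, not from the answer
theta0, theta1, sigma, n, alpha = 0.0, 1.0, 1.0, 25, 0.05

# Critical value for the sample mean under H0: P(xbar >= c | theta0) = alpha,
# with xbar ~ N(theta0, sigma^2 / n) when theta = theta0
z = NormalDist().inv_cdf(1 - alpha)       # upper-alpha quantile, ~1.645
c = theta0 + z * sigma / sqrt(n)          # reject H0 when xbar >= c

# Back out K: ln r evaluated at xbar = c, where
# ln r = n(theta0-theta1)/sigma^2 * xbar + n(theta1^2-theta0^2)/(2 sigma^2)
lnK = (n * (theta0 - theta1) / sigma**2) * c \
      + n * (theta1**2 - theta0**2) / (2 * sigma**2)
K = exp(lnK)

print(round(c, 4))   # 0.329
print(K)             # ≈ 71.9
```

Both formulations describe the same rejection region: comparing $\overline{x}$ to $c$ is equivalent to comparing $r$ to this $K$.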