Question: Does the Neyman–Pearson lemma give instructions for how to construct the test when the likelihood ratio is not strictly monotonic across the outcome space?
I suspect the answer is NO, but I would like to:
- Get confirmation that the answer is indeed NO.
- If you have an alternative lemma for such cases (instead of going through all of the possible rejection regions), I'd be curious to know about it.
An extended example for illustration:
Assume a single observation from a multinomial distribution over the outcomes {a, b, c}, with the following two hypotheses about the probabilities:
$H_0:p=(1/3, 1/6, 1/2)$
$H_1:p=(2/3, 2/6, 0)$
The likelihood ratio $\lambda(x) = P(x \mid H_1) / P(x \mid H_0)$ for each possible outcome is:
$\lambda(a) = 2$
$\lambda(b) = 2$
$\lambda(c) = 0$
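For concreteness, these ratios can be checked with a few lines of Python (a minimal sketch; the dictionaries just restate the two probability vectors above):

```python
# Probabilities of each outcome under H0 and H1
p0 = {"a": 1/3, "b": 1/6, "c": 1/2}
p1 = {"a": 2/3, "b": 2/6, "c": 0.0}

# Likelihood ratio lambda(x) = P(x | H1) / P(x | H0)
lam = {x: p1[x] / p0[x] for x in p0}
print(lam)  # {'a': 2.0, 'b': 2.0, 'c': 0.0}
```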
This means we can build rejection regions using one of the following threshold rules:
$\lambda > 3 \Rightarrow \text{never reject} \Rightarrow \alpha = 0$
$\lambda > 1 \Rightarrow \text{reject for } a, b \Rightarrow \alpha = 1/2$
$\lambda \ge 0 \Rightarrow \text{always reject} \Rightarrow \alpha = 1$
So if I am interested in the most powerful test at $\alpha = 1/2$, I know what the test is. But what if I want a test at level $\alpha = 1/3$? I can use either "reject if $a$" (size $1/3$) or "reject if $b$" (size $1/6$), and clearly the first one is more powerful ($\pi = 2/3$ versus $\pi = 1/3$), but I arrived at this rule by inspecting my options directly, not through the NP lemma.
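The brute-force route mentioned above (going through all possible deterministic rejection regions) is cheap for a three-point outcome space; a sketch of that enumeration, using the probabilities from the example:

```python
from itertools import chain, combinations

# Probabilities of each outcome under H0 and H1 (from the example)
p0 = {"a": 1/3, "b": 1/6, "c": 1/2}
p1 = {"a": 2/3, "b": 2/6, "c": 0.0}

# All 2^3 deterministic rejection regions (subsets of {a, b, c})
outcomes = list(p0)
regions = chain.from_iterable(
    combinations(outcomes, r) for r in range(len(outcomes) + 1)
)

best = None
for R in regions:
    alpha = sum(p0[x] for x in R)  # size of the test under H0
    power = sum(p1[x] for x in R)  # power of the test under H1
    if alpha <= 1/3 and (best is None or power > best[2]):
        best = (R, alpha, power)

# The most powerful level-1/3 test rejects only on outcome 'a',
# with size 1/3 and power 2/3, matching the reasoning above.
print(best)
```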