Most powerful test for discrete uniform


Let $X$ be a random sample from a discrete distribution with probability mass function $$f(x;\theta)=\frac{1}{\theta},\quad x=1,2,\ldots,\theta,$$ and $f(x;\theta)=0$ otherwise, where the unknown parameter $\theta$ is either $20$ or $40$. Consider testing $H_{0}:\theta=40$ against $H_{1}:\theta=20$. Find the uniformly most powerful level $\alpha=0.1$ test for testing $H_{0}$ vs $H_{1}$.

I am new to the construction of MP tests. I was trying to use the Neyman–Pearson lemma to construct the test, but the likelihood ratio seems meaningless here because the supports of the two distributions differ under the two hypotheses. How should I tackle this problem?

There are 2 answers below.

Answer 1:

We have the distribution of a single observation $X$:

\begin{align} f_{\theta}(x)&=\frac{1}{\theta}\mathbf1_{x\in\{1,2,\ldots,\theta\}}\quad,\,\theta\in\{20,40\} \end{align}

By NP lemma, an MP test of level $\alpha$ for testing $H_0:\theta=40$ against $H_1:\theta=20$ is of the form

\begin{align} \varphi(x)&=\begin{cases}1&,\text{ if }\lambda(x)>k\\\gamma&,\text{ if }\lambda(x)=k\\0&,\text{ if }\lambda(x)<k\end{cases} \end{align}

where $$\lambda(x)=\frac{f_{H_1}(x)}{f_{H_0}(x)}$$

and $\gamma\in[0,1]$ and $k(>0)$ are so chosen that $$E_{H_0}\,\varphi(X)= 0.1$$

Now,

\begin{align} \lambda(x)&=2\frac{\mathbf1_{x\in\{1,2,\ldots,20\}}}{\mathbf1_{x\in\{1,2,\ldots,40\}}} \\\\&=\begin{cases}2&,\text{ if }x=1,2,\ldots,20 \\0&,\text{ if }x=21,22,\ldots,40 \end{cases} \end{align}

Since $\lambda$ is non-increasing in $x$, for some $c$, $$\lambda(x)\gtrless k\implies x\lessgtr c$$

And the size restriction gives $$P_{H_0}(X<c)+\gamma P_{H_0}(X=c)= 0.1\tag{1}$$

Taking different values of $c$ (namely $c=2,3,4,5$) and computing the corresponding tail probability $P_{H_0}(X<c)=\frac{c-1}{40}$ subject to $(1)$, I end up with $$c=4\quad,\quad \gamma=1$$
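This search can be checked numerically. The following is a minimal sketch (variable names are my own) that scans the same candidate values of $c$ and solves $(1)$ for $\gamma$ in exact rational arithmetic:

```python
from fractions import Fraction

# Under H0: theta = 40, so P(X = x) = 1/40 for x = 1, ..., 40.
alpha = Fraction(1, 10)
p_at = Fraction(1, 40)                 # P_{H0}(X = c) for any c in {1, ..., 40}

for c in [2, 3, 4, 5]:
    p_below = Fraction(c - 1, 40)      # P_{H0}(X < c)
    # gamma making the size exactly alpha, if it lands in [0, 1]:
    gamma = (alpha - p_below) / p_at
    if 0 <= gamma <= 1:
        print(f"c = {c}: gamma = {gamma}, size = {p_below + gamma * p_at}")
# c = 4 gives gamma = 1; c = 5 gives gamma = 0, which describes the same test
```

Note that $c=4,\gamma=1$ and $c=5,\gamma=0$ both describe the test that rejects exactly when $x\leqslant 4$.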

So the required test is $$\varphi(x)=\mathbf1_{x\leqslant 4}$$

This test is UMP: the alternative here is simple, and moreover $\varphi$ does not depend on the value of $\theta$ specified under $H_1$.

Answer 2:

The domains of the two distributions are not really different. In a hypothesis-testing scenario, the domains can't be different; otherwise the hypothesis testing doesn't really make sense. If the experiment produced outputs in an entirely different domain depending on which hypothesis holds, then surely it would be pretty easy to tell which hypothesis is true.

You can imagine both of these distributions as being on $\{1, ..., 40\}$, or on $\mathbb N$ or even $\mathbb R$: any measurable space which contains the supports of the two distributions, really. In any case, the likelihood ratio of the alternative distribution to the null distribution comes out to be $2$ on $\{1, ..., 20\}$, $0$ on $\{21, ..., 40\}$, and is of the indeterminate form $0/0$ everywhere else. Notice that if you take the common domain of the distributions under $H_0$ and $H_1$ to be, say, $\mathbb N$, then the ratio is undefined for most of $\mathbb N$, since you'd be dividing by zero. But it's well-defined $H_0$-almost everywhere, which is all that matters for the Neyman–Pearson lemma.
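This point can be sketched concretely. Below, $\{1,\ldots,60\}$ is an arbitrary finite stand-in for a larger common domain such as $\mathbb N$ (the cutoff $60$ is my own choice for illustration):

```python
# Likelihood ratio lambda(x) = f_{theta=20}(x) / f_{theta=40}(x), computed on
# {1, ..., 60} as a finite stand-in for a common domain like N.
def f(x, theta):
    """pmf of the discrete uniform on {1, ..., theta}."""
    return 1 / theta if 1 <= x <= theta else 0.0

def lam(x):
    num, den = f(x, 20), f(x, 40)
    return None if den == 0 else num / den   # None marks the undefined 0/0 case

values = {x: lam(x) for x in range(1, 61)}
# lambda = 2 on {1..20}, 0 on {21..40}, undefined on {41..60}; the undefined
# points all carry zero probability under H0, so the NP test is unaffected.
assert all(values[x] is not None for x in range(1, 41))
```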