Power and size relation in hypothesis testing

Let $1-\beta$ be the power of an MP (most powerful) size $\alpha$ test, where $0 <\alpha< 1$. Show that $$\alpha < 1-\beta$$ unless $P_{\theta_0} = P_{\theta_1}$.

My approach:

$$\alpha = P_{\theta_0}(x \in R) \tag{i}$$ $$\beta = P_{\theta_1}(x \in A) = 1 - P_{\theta_1}(x \in R)$$ $$1 - \beta = P_{\theta_1}(x \in R) \tag{ii}$$

where $R$ is the critical region and $A$ is the acceptance region.

Since it is a most powerful test, $\beta$ is minimized, and hence $1-\beta$ is maximized.

How can I proceed from here?


This is a corollary of the necessity part of the Neyman-Pearson (NP) lemma, as discussed here on pages 3-4.

Suppose we want to test a simple null $H_0:\theta=\theta_0$ against a simple alternative $H_1:\theta=\theta_1 (\ne \theta_0)$ based on a sample $X$ with probability distribution $P_\theta$.

Let $P_\theta$ have density $f_\theta$ with respect to some dominating measure $\mu$.

Let $\phi$ be any size $\alpha$ test of the form $$\phi(x)=\begin{cases}1&,\text{ if }f_{\theta_1}(x)>k\,f_{\theta_0}(x) \\ 0 &,\text{ if }f_{\theta_1}(x)<k\,f_{\theta_0}(x)\end{cases} $$

where $k>0$, and $\phi$ on the boundary $\{f_{\theta_1}=k\,f_{\theta_0}\}$ is defined so that $E_{\theta_0}\phi(X)=\alpha$.
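For concreteness (an illustrative instance, not part of the original argument), suppose $X\sim N(\theta,1)$ with $\theta_1>\theta_0$. Then $f_{\theta_1}/f_{\theta_0}$ is increasing in $x$, so the test above reduces to

$$\phi(x)=\begin{cases}1&,\text{ if }x>c \\ 0&,\text{ if }x\le c\end{cases}\qquad\text{where } c=\theta_0+z_{1-\alpha}\text{ satisfies } P_{\theta_0}(X>c)=\alpha.$$

Here the boundary $\{f_{\theta_1}=k\,f_{\theta_0}\}=\{x=c\}$ has Lebesgue measure zero, so no randomization is needed to attain size exactly $\alpha$.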

Suppose $1-\beta$ is the power of any most powerful size $\alpha$ test for testing $H_0$ against $H_1$ where $\alpha\in(0,1)$. Assume $1-\beta<1$.

Consider a trivial test $\phi^*(x)=\alpha$ for every $x$, so that $$E_{\theta_0}\phi^*(X)=E_{\theta_1}\phi^*(X)=\alpha$$

Since $1-\beta$ is the power of an MP size $\alpha$ test, for every $\theta_1\ne \theta_0$, $$1-\beta\ge E_{\theta_1}\phi^*(X)\implies 1-\beta\ge \alpha \tag{$\star$}$$

[This shows that an MP test is necessarily unbiased (i.e. its power is at least its size).]

Strict inequality holds in $(\star)$ unless $1-\beta= E_{\theta_1}\phi^*(X)=\alpha$. That is to say, an MP test with power $1-\beta$ is strictly unbiased unless $\phi^*$ is itself MP of size $\alpha$. But if $\phi^*$ is MP of size $\alpha$, the necessity part of the NP lemma says that $\phi^*=\phi$ almost everywhere (a.e.) $\mu$ on $\{f_{\theta_1}\ne k\,f_{\theta_0}\}$.

But $\phi^*(x)=\alpha\in(0,1)$ everywhere, while $\phi$ takes only the values $0$ and $1$ on $\{f_{\theta_1}\ne k\,f_{\theta_0}\}$, so $\phi^*=\phi$ cannot hold at any point of that set. This forces $$\mu\left(\left\{f_{\theta_1}\ne k\,f_{\theta_0}\right\}\right)=0$$

In other words, $$f_{\theta_1}=k\,f_{\theta_0} \,\text{ a.e. }\mu$$

This further implies $$1=\int f_{\theta_1}\,d\mu=k\int f_{\theta_0}\,d\mu=k$$

Hence $1-\beta>\alpha$ unless $f_{\theta_0}=f_{\theta_1}$ a.e. $\mu$, i.e. unless $P_{\theta_0}=P_{\theta_1}$.
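As a quick numerical sanity check (in an assumed example not taken from the problem: $X\sim N(\theta,1)$, $H_0:\theta=0$ vs $H_1:\theta=1$, $\alpha=0.05$), the MP test's power indeed strictly exceeds its size:

```python
# Numerical check of 1 - beta > alpha for the MP (Neyman-Pearson) test.
# Assumed example: X ~ N(theta, 1), H0: theta = 0 vs H1: theta = 1.
# By the NP lemma, the MP size-alpha test rejects when x > z_{1-alpha}.
from statistics import NormalDist

alpha = 0.05
c = NormalDist(0, 1).inv_cdf(1 - alpha)   # critical value under H0
power = 1 - NormalDist(1, 1).cdf(c)       # 1 - beta = P_{theta_1}(X > c)

print(f"size  = {alpha:.4f}")
print(f"power = {power:.4f}")
assert power > alpha  # strict, since P_{theta_0} != P_{theta_1} here
```

Here $P_{\theta_0}\ne P_{\theta_1}$, and the computed power ($\approx 0.26$) is strictly greater than $\alpha=0.05$, so $(\star)$ holds strictly, as the argument above predicts.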