Uniformly Most Powerful Test for Unknown Variance of Normal Distribution


Mean of the Normal distribution is known.

Initial test hypotheses:

$H_0: \sigma^2 = \sigma^2_0$ $H_1: \sigma^2 = \sigma^2_1$

where $\sigma^2_0 < \sigma^2_1$

Since, under $H_0$,
$\sum_{i=1}^{n} \left(\frac{x_i-\mu}{\sigma_0}\right)^2$ follows a $\chi^2_n$ distribution,

and given that significance level of the test is 0.05, the UMP test form is

"Reject $H_0$ if $\sum_{i=1}^{n} \left(\frac{x_i-\mu}{\sigma_0}\right)^2 > \chi^2_{n,0.05}$", where $\chi^2_{n,0.05}$ denotes the upper 0.05 quantile of the $\chi^2_n$ distribution.
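As a numerical sanity check (a sketch, not part of the original question; the values $n=20$, $\mu=0$, $\sigma_0^2=1$ are assumptions), the critical value and the size of this test can be verified by simulation:

```python
# Sketch: verify the size of the test "reject H0 if T > chi^2_{n,0.05}".
# Assumed values: n = 20 observations, known mean mu = 0, sigma_0^2 = 1.
import numpy as np
from scipy import stats

n, mu, sigma0_sq = 20, 0.0, 1.0
alpha = 0.05

# Upper-tail critical value chi^2_{n,0.05}: P(chi^2_n > c) = 0.05
c = stats.chi2.ppf(1 - alpha, df=n)

# Simulate the test statistic under H0 and estimate the rejection rate
rng = np.random.default_rng(0)
x = rng.normal(mu, np.sqrt(sigma0_sq), size=(100_000, n))
T = ((x - mu) ** 2).sum(axis=1) / sigma0_sq
size_hat = (T > c).mean()
print(c, size_hat)  # size_hat should be close to alpha = 0.05
```

With 100,000 replications the estimated rejection rate under $H_0$ should agree with $\alpha$ to about three decimal places.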

1st variant test hypotheses

$H_0: \sigma^2 = \sigma^2_0$ $H_1: \sigma^2 > \sigma^2_0$

The test form does not depend on $\sigma_1^2$; since it is most powerful against every simple alternative $\sigma^2 = \sigma^2_1 > \sigma^2_0$, it is also the UMP rejection region for this set of hypotheses.

2nd variant test hypotheses

$H_0: \sigma^2 \le \sigma^2_0$ $H_1: \sigma^2 > \sigma^2_0$

Is our test form still UMP for the above set of hypotheses?

According to my lecture's materials, the power function is defined as

$\Pr(\text{reject } H_0 \mid \sigma^2)$.

In addition, if the power function is increasing in the parameter over the null parameter set, then its supremum over that set is attained at $\sigma_0^2$, so the size of the test is still $\alpha$, and the test form would also be UMP for the above. However, our test statistic seems to be a decreasing function of $\sigma_0^2$, which is unexpected, since we are required to show the test form is also UMP for this new set of hypotheses. How can that be shown?
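To see the monotonicity concretely (a sketch with assumed values $n=20$, $\sigma_0^2=1$, not part of the original question): when the true variance is $\sigma^2$, the statistic $\sum_i (x_i-\mu)^2/\sigma_0^2$ is $(\sigma^2/\sigma_0^2)$ times a $\chi^2_n$ variable, so the power is $\pi(\sigma^2)=P(\chi^2_n > c\,\sigma_0^2/\sigma^2)$, which is increasing in the true $\sigma^2$:

```python
# Sketch: the power pi(sigma^2) = P(chi^2_n > c * sigma_0^2 / sigma^2) is
# increasing in the TRUE variance sigma^2, so its supremum over the null set
# {sigma^2 <= sigma_0^2} is attained at sigma_0^2 and equals alpha.
# n and sigma_0^2 are assumed values for illustration.
import numpy as np
from scipy import stats

n, sigma0_sq, alpha = 20, 1.0, 0.05
c = stats.chi2.ppf(1 - alpha, df=n)

def power(sigma_sq):
    # survival function sf(t) = P(chi^2_n > t)
    return stats.chi2.sf(c * sigma0_sq / sigma_sq, df=n)

grid = np.linspace(0.25, 4.0, 16)
vals = power(grid)
assert np.all(np.diff(vals) > 0)  # power is monotone increasing in sigma^2
print(power(sigma0_sq))           # equals alpha at the boundary sigma_0^2
```

The point of confusion dissolves here: the power is increasing in the true variance, not in $\sigma_0^2$, and its supremum over $H_0$ sits exactly at the boundary value $\sigma_0^2$.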


There are 2 best solutions below


First, note that your model is Gaussian with a KNOWN mean.

In this model, $S=\frac{\sum_i(x_i-\mu)^2}{n}$ is a sufficient statistic for $\sigma^2$, and thus so is

$$T=\frac{\sum_i(x_i-\mu)^2}{\sigma_0^2}.$$

Moreover, the distribution of $T$ has a monotone likelihood ratio in $\sigma^2$, so you can apply the Karlin-Rubin theorem. A proof of this theorem can be found in Casella and Berger, Theorem 8.3.17.
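The monotone likelihood ratio property that the Karlin-Rubin theorem requires can be checked numerically (a sketch; the values of $n$ and the two variances are assumptions for illustration). Under true variance $\sigma^2$, the statistic $T$ is a $\chi^2_n$ variable scaled by $\sigma^2/\sigma_0^2$, and the ratio of its densities at two variance values is increasing in $t$:

```python
# Sketch: the family of T = sum (x_i - mu)^2 / sigma_0^2 has a monotone
# likelihood ratio in sigma^2: f_{sigma_1^2}(t) / f_{sigma_0^2}(t) is
# increasing in t whenever sigma_1^2 > sigma_0^2, which is the hypothesis
# of the Karlin-Rubin theorem. n and the variances are assumed values.
import numpy as np
from scipy import stats

n, sigma0_sq, sigma1_sq = 10, 1.0, 2.0

# Under true variance s2, T is a chi^2_n variable scaled by s2 / sigma0_sq;
# scipy's chi2 distribution takes this via its scale parameter.
def pdf_T(t, s2):
    return stats.chi2.pdf(t, df=n, scale=s2 / sigma0_sq)

t_grid = np.linspace(1.0, 40.0, 200)
ratio = pdf_T(t_grid, sigma1_sq) / pdf_T(t_grid, sigma0_sq)
assert np.all(np.diff(ratio) > 0)  # likelihood ratio increases in t
```

Analytically the ratio is proportional to $e^{t/4}$ for these particular values, which is visibly increasing.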

Your second variant is equivalent to the first one because the test is unbiased.

When $\sigma^2 < \sigma_0^2$, you are no longer looking at the power: there you are under the null hypothesis, so you are calculating the type I error probability.

In other words, remember that the test size is defined as

$$\alpha=\sup_{\theta \in \Theta_0}\pi(\theta)$$

and also remember that, in this situation, the infimum of the power function over the alternative is attained in the limit

$$\lim\limits_{\theta \to \theta_0^+}\pi(\theta)=\alpha$$

Here $\pi(\theta)$ denotes the power function.


Consider the following density

$$f_X(x)=\theta x^{\theta-1}$$

$0<x<1$ and $\theta>0$

Suppose we want to test the following system of hypotheses

$$\begin{cases} H_0: & \theta \le 1 \\ H_1: & \theta > 1 \end{cases}$$

using a single observation from $f_X(x)$

Let's suppose that our test is the following: Reject $H_0$ iff $x\ge 0.5$

The size of the test (the type I error at $\theta=1$, where the supremum over $\Theta_0$ is attained) is:

$$\mathbb{P}[X\ge 0.5|\theta=1]=\int_{0.5}^1 dx=0.5$$

while the power function, for $\theta > 1$, is

$$\pi(\theta)=\mathbb{P}[X\ge 0.5\mid\theta]=\int_{0.5}^1 \theta x^{\theta-1}\,dx=1-\frac{1}{2^\theta}$$

As you can see, $\pi(\theta)$ is strictly increasing over all of $\Theta$. When $\theta \in \Theta_0$ we are under the null hypothesis, and by definition $\alpha$ is the maximum value that $\pi(\theta)$ attains there. At the same time, $\alpha$ is the minimum value (the infimum, to be precise) that $\pi(\theta)$ attains over the alternative parameter space $\Theta_1$.
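The numbers in this example are easy to verify directly (a sketch, not part of the original answer; the Monte Carlo check at $\theta=2$ is an added illustration). Since the CDF is $F(x)=x^\theta$ on $(0,1)$, one can sample $X$ as $U^{1/\theta}$ by inverse-CDF sampling:

```python
# Sketch verifying the answer's numbers: for f(x) = theta * x^(theta-1) on
# (0,1) and the test "reject H0 iff x >= 0.5", the power function is
# pi(theta) = 1 - 2^(-theta), increasing, with pi(1) = 0.5 (the size).
import numpy as np

def power(theta):
    return 1.0 - 2.0 ** (-theta)

# Monte Carlo check at theta = 2 (an assumed illustrative value):
# X = U^(1/theta) has CDF F(x) = x^theta, hence density theta * x^(theta-1).
rng = np.random.default_rng(0)
theta = 2.0
x = rng.random(200_000) ** (1.0 / theta)  # inverse-CDF sampling
mc = (x >= 0.5).mean()
print(power(1.0), power(theta), mc)       # 0.5, 0.75, mc close to 0.75
```

The simulated rejection rate at $\theta=2$ matches the closed form $1-2^{-2}=0.75$, and evaluating `power` at the boundary $\theta=1$ recovers the size $0.5$.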
