Karlin-Rubin theorem, normal distribution


Let $X_1,...,X_n \sim N(\mu, \sigma^2)$, where $\mu$ is known and $\sigma^2$ is unknown. We have a pair of hypotheses: $ \begin{cases} H_0: \sigma^2\leq a \\ H_1: \sigma^2 > a \end{cases}$

So I use the Karlin-Rubin theorem: $\phi(x)= \begin{cases} 1& \text{if $T(x) \geq k$ } \\ 0& \text{if $T(x) < k$ } \end{cases}$

My PDF is: $f(x)= \exp[\frac{-1}{2\sigma^2}(x-\mu)^2 -\frac{1}{2}\ln2\pi\sigma^2]$

Of course it is an exponential family, with $C(\sigma^2)=-\frac{1}{2\sigma^2}$, which is increasing in $\sigma^2$, so my $T(X)=\sum(X_i-\mu)^2$. Here comes the struggle: I've seen someone do it in this fashion:

$\phi(x)= \begin{cases} 1& \text{if $\sum(X_i-\mu)^2 \leq k$ } \\ 0& \text{if $\sum(X_i-\mu)^2 > k$ } \end{cases}$

so the inequalities are reversed. Here comes the question: why?
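One quick way to convince yourself of the direction of the rejection region is to check the monotone likelihood ratio numerically: for $\sigma_1^2 > \sigma_0^2$, the (log-)likelihood ratio should be increasing in $T(x)=\sum(x_i-\mu)^2$, so large $T$ is evidence for large $\sigma^2$. A minimal sketch (the values of $\mu$, $n$ and the two variances are arbitrary, chosen only for illustration):

```python
import numpy as np

# Monotone likelihood ratio check for N(mu, sigma^2) with mu known.
# For sigma1^2 > sigma0^2 the log-likelihood ratio
#   log f(x; sigma1^2) - log f(x; sigma0^2)
# should be an increasing function of T(x) = sum((x_i - mu)^2),
# which is why the UMP test rejects for LARGE T.
rng = np.random.default_rng(0)
mu, n = 0.0, 10
s0sq, s1sq = 1.0, 4.0  # sigma0^2 < sigma1^2 (illustrative)

def log_lik(x, ssq):
    return np.sum(-0.5 * (x - mu) ** 2 / ssq - 0.5 * np.log(2 * np.pi * ssq))

samples = [rng.normal(mu, 1.0, size=n) for _ in range(1000)]
T = np.array([np.sum((x - mu) ** 2) for x in samples])
llr = np.array([log_lik(x, s1sq) - log_lik(x, s0sq) for x in samples])

# Sorting the samples by T must sort the likelihood ratio as well
# (up to floating-point noise): the ratio is monotone in T.
order = np.argsort(T)
assert np.all(np.diff(llr[order]) >= -1e-9)
```

Algebraically the same thing: the log-ratio equals $\frac{1}{2}\left(\frac{1}{\sigma_0^2}-\frac{1}{\sigma_1^2}\right)T(x)$ plus a constant, and the coefficient of $T$ is positive when $\sigma_1^2>\sigma_0^2$.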

EDIT Another question is about $k$. We use the condition $E_a[\phi(X)]=\alpha$ to obtain $k$, and thus (at some point) we set our $\sigma^2$ equal to $a$. Do we do this because, if the condition is fulfilled for $\sigma^2=a$, then it is also fulfilled for $\sigma^2>a$? So, to put it simply, do we reject the null hypothesis the "more" the greater $\sigma^2$ is than $a$?

An example: in this case $\Bbb P(\sum(\frac{X_i -\mu}{\sigma})^2\leq\frac{k}{\sigma^2})=1-\alpha$ as $\sum(\frac{X_i -\mu}{\sigma})^2 \sim \mathcal X_n^2$ so:

$\mathcal X_{n,1-\alpha}^2=\frac{k}{\sigma^2}$

$k=a\,\mathcal X_{n,1-\alpha}^2$ (substituting $\sigma^2=a$)

Why do we put $a$ up there? Is it because: $ \begin{cases} H_0: \sigma^2\leq a \\ H_1: \sigma^2 > a \end{cases} \equiv \begin{cases} H_0: \sigma^2= a \\ H_1: \sigma^2 > a \end{cases}$?
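For concreteness, the critical value $k=a\,\chi^2_{n,1-\alpha}$ can be computed and its size checked by simulation at the boundary $\sigma^2=a$. A sketch with illustrative values of $n$, $a$, $\alpha$:

```python
import numpy as np
from scipy.stats import chi2

# Under sigma^2 = a, T/a = sum(((X_i - mu)/sqrt(a))^2) ~ chi-square(n),
# so the size-alpha critical value is k = a * chi2_{n, 1-alpha}.
n, a, alpha = 10, 2.0, 0.05  # illustrative values
k = a * chi2.ppf(1 - alpha, df=n)

# Monte Carlo check of the size at the boundary sigma^2 = a:
rng = np.random.default_rng(1)
mu = 0.0
X = rng.normal(mu, np.sqrt(a), size=(200_000, n))
T = np.sum((X - mu) ** 2, axis=1)
size = np.mean(T >= k)  # should be close to alpha
```

For $\sigma^2 < a$ the same simulation gives a rejection rate below $\alpha$, which is exactly why fixing the size at the boundary controls it over all of $H_0$.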

BEST ANSWER

Just a different point of view:

The Karlin-Rubin theorem is just an extension of the Neyman-Pearson fundamental lemma.

So, if a sufficient statistic exists, the likelihood ratio of the lemma factorizes as

$$\frac{L(\mathbf{x};\theta_0)}{ L(\mathbf{x};\theta_1)}=\frac{h(\mathbf{x})\,g(t(\mathbf{x});\theta_0)}{h(\mathbf{x})\,g(t(\mathbf{x});\theta_1)}=\frac{g(t(\mathbf{x});\theta_0)}{g(t(\mathbf{x});\theta_1)}$$

and it is then self-evident that the UMP test is based on the sufficient statistic $T=t(\mathbf{x})$.

If the model belongs to the exponential family (as is the case here), then in order to define the critical region $C$ you can use this useful theorem taken from Mood, Graybill and Boes:

[Image: statement of the theorem from Mood, Graybill and Boes on UMP one-sided tests for one-parameter exponential families.]

A similar theorem (in the same book) exists for models not belonging to the exponential family but having a monotone likelihood ratio.

In both of these situations a UMP test exists and the solution is immediate.
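The key property behind the UMP claim, and behind the asker's EDIT question, is that the power function of $\phi=\mathbf{1}\{T\geq k\}$ is increasing in $\sigma^2$, so the size over $H_0:\sigma^2\leq a$ is attained exactly at the boundary $\sigma^2=a$. A short numerical check (illustrative $n$, $a$, $\alpha$):

```python
import numpy as np
from scipy.stats import chi2

# Power of the test {T >= k} as a function of sigma^2:
#   P(T >= k) = P(chi2_n >= k / sigma^2) = chi2.sf(k / sigma^2, n),
# which is increasing in sigma^2, and equals alpha at sigma^2 = a.
n, a, alpha = 10, 2.0, 0.05  # illustrative values
k = a * chi2.ppf(1 - alpha, df=n)

sigmas_sq = np.linspace(0.5, 8.0, 50)
power = chi2.sf(k / sigmas_sq, df=n)

assert np.all(np.diff(power) > 0)                  # monotone in sigma^2
assert abs(chi2.sf(k / a, df=n) - alpha) < 1e-12   # size alpha at the boundary
```

Because the power curve is monotone, requiring $E_a[\phi(X)]=\alpha$ at $\sigma^2=a$ automatically gives rejection probability below $\alpha$ everywhere in $H_0$ and above $\alpha$ everywhere in $H_1$.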