This is a question from Statistical Theory I have encountered. I have almost solved it, but have some trouble interpreting the solution. Something seems weird, and I am not sure whether I am entirely correct.
$X$ is a random variable with density $f_\theta(x) = \frac{3}{\theta^3}\,x^2\,\mathbf{1}_{(0,\theta)}(x)$. We want to test the hypotheses $H_0: \theta = 1$ vs. $H_1: \theta = 1.1$ at significance level $\alpha = P(\text{reject } H_0 \mid H_0 \text{ is true})$. We gather $n = 100$ observations.
Okay, we perform the likelihood ratio test: $$ \Lambda = \frac{L(x_1,\dots,x_n;\theta_1)}{L(x_1,\dots,x_n;\theta_0)} = \frac{\left(\frac{3}{1.1^3}\right)^n \prod x_i^2 \,\mathbf{1}_{\{\max x_i < 1.1\}}}{3^n \prod x_i^2 \,\mathbf{1}_{\{\max x_i < 1\}}} = \begin{cases} c_n, & \text{if $\max x_i \leq 1$,} \\ +\infty, & \text{if $\max x_i > 1$,} \\ \end{cases} $$ where $c_n = \frac{1}{1.1^{3n}}$, so $c = c_{100} = 1.1^{-300}$.
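As a sanity check, here is a short simulation sketch (my own code, not part of the problem) that draws a sample from $f_\theta$ via the inverse CDF $F(x) = x^3/\theta^3$, i.e. $X = \theta\,U^{1/3}$, and computes $\Lambda$; the helper names are mine:

```python
import random

random.seed(0)
n, theta0, theta1 = 100, 1.0, 1.1

def sample(theta, n):
    # Inverse-CDF sampling: F(x) = x^3 / theta^3, so X = theta * U^(1/3)
    return [theta * random.random() ** (1 / 3) for _ in range(n)]

def likelihood_ratio(xs):
    m = max(xs)
    if m > theta1:
        # Impossible under either hypothesis (0/0); return 0 by convention
        return 0.0
    if m > theta0:
        # Numerator positive, denominator zero
        return float("inf")
    # The product terms cancel, leaving c_n = (theta0/theta1)^(3n)
    return (theta0 / theta1) ** (3 * n)

xs = sample(theta0, n)
print(likelihood_ratio(xs))  # c_100 = 1.1**(-300), about 3.8e-13
```

Under $H_0$ the ratio always comes out as $c_{100}$, since $\max x_i \leq 1$ almost surely.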
I run into trouble when I try to match the significance level: the test should go like this: reject $H_0$ if $\Lambda > C = C(\alpha)$.
If $C < c$ we always reject $H_0$, so the significance level is 1. If $C \geq c$ we reject the null hypothesis only when $\max x_i > 1$; but under $H_0$ this never happens, so $\alpha = 0$.
The power of the test is $\pi = 1 - P(\text{do not reject } H_0 \mid H_1 \text{ is true})$: either $1$ (when $C < c$) or, when $C \geq c$, $1 - \left(\frac{1}{1.1^3}\right)^n \approx 1 - 0.7513^n$, since under $H_1$ each observation falls in $(0,1]$ with probability $\int_0^1 \frac{3}{1.1^3}x^2\,dx = \frac{1}{1.1^3}$.
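A quick Monte Carlo check of this power formula (my own sketch; I use a small $n$ because at $n = 100$ the power $1 - 1.1^{-300}$ is indistinguishable from 1):

```python
import random

random.seed(1)
theta1 = 1.1
trials = 20000

# Check the power formula at a small n where the gap from 1 is visible
n = 5
exact_power = 1 - theta1 ** (-3 * n)  # P(max X_i > 1 | H1)

def sample_max():
    # X = theta * U^(1/3) inverts the CDF F(x) = x^3 / theta^3
    return max(theta1 * random.random() ** (1 / 3) for _ in range(n))

estimate = sum(sample_max() > 1 for _ in range(trials)) / trials
print(round(estimate, 3), round(exact_power, 3))
```

The empirical rejection frequency agrees with $1 - 1.1^{-3n}$ up to Monte Carlo noise.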
So, first of all, can somebody please verify whether I have made any significant mistakes? (I may have gotten some number wrong, but am I conceptually correct?)
How do I calculate $C = C(\alpha)$? What is the precise significance and power of this test for, say, $\alpha = 0.01$? The fact that the test is so "$\alpha$-independent" makes me suspicious.
OK, you found the ratio correctly. Now you need to construct the critical region in the form $\Omega = \{x:\Lambda(x) > C_{\alpha}\}$, where $C_{\alpha}$ corresponds to your significance level $\alpha$. Let's vary $C$ and see what we get.

If $C=0$, the critical region consists of all possible samples $x$, i.e. $\Omega_1 = [0, 1.1]^n$, and the probability of a type I error equals 1. The same holds for every $C \in [0,c_n)$. If $C \ge c_n$, the critical region consists only of samples with $\max x_i > 1$, i.e. $\Omega_2 = \{ x: \max x_i > 1 \}$, and the probability of a type I error equals 0. So neither $\Omega_1$ nor $\Omega_2$ is the critical region we are looking for, yet we have run through all possible values of $C$.

In this case a randomization procedure can be applied: we consider the critical function $$\varphi \left( x \right) = \begin{cases} 1, & \text{for } x \in \Omega_2, \\ \alpha, & \text{for } x \in \Omega_1 \setminus \Omega_2. \\ \end{cases}$$ If $x\in \Omega_2$, then $H_0$ is rejected. If $x \in \Omega_1 \setminus \Omega_2$, then $H_0$ is rejected with probability $\frac{\alpha-\alpha_0}{p_0}$, where $\alpha_0 = \Pr(\Omega_2 \mid H_0)=0$ and $p_0 = \Pr(\Omega_1 \setminus \Omega_2 \mid H_0) = \Pr(\Omega_1 \mid H_0) - \Pr(\Omega_2 \mid H_0)=1$, i.e. with probability $\alpha$. Note also that you accept $H_0$ with probability 1 on the complement of $\Omega_1$, which is $\varnothing$ here, as $\Omega_1$ contains all possible $x$. This criterion is proved to be optimal in the sense of minimizing the probability of a type II error.
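To see the randomized test doing what it should, here is a simulation sketch (again with a small $n$ so the power is visibly below 1; the helper names are mine). The exact power of the randomized test is $1 - (1-\alpha)\,1.1^{-3n}$: reject for sure when $\max x_i > 1$, else reject with probability $\alpha$.

```python
import random

random.seed(2)
alpha, n, trials = 0.05, 5, 40000
theta1 = 1.1

def sample_max(theta):
    # X = theta * U^(1/3) inverts the CDF F(x) = x^3 / theta^3
    return max(theta * random.random() ** (1 / 3) for _ in range(n))

def rejects(theta):
    # Reject if the sample lands in Omega_2; otherwise randomize with prob. alpha
    return sample_max(theta) > 1 or random.random() < alpha

size = sum(rejects(1.0) for _ in range(trials)) / trials
power = sum(rejects(theta1) for _ in range(trials)) / trials
print(round(size, 3), round(power, 3))
# size should be near alpha; power near 1 - (1 - alpha) * theta1**(-3*n)
```

So the randomized test attains exactly level $\alpha$ while keeping the type II error as small as possible.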
This situation is standard when you test hypotheses with discrete (and, in some cases, continuous) distributions.