Hypothesis Testing under Uniform Distribution Question


The question reads:

Let $\theta > 0$ and $X \sim \mathcal{U}[0, \theta]$, i.e. $X$ is uniformly distributed on the interval $[0, \theta]$.

Assume that $\theta$ is unknown, but we can observe $X$. For a given $\theta_1$, we want to test the hypothesis $H_0: \theta \geq \theta_1$ against the alternative $H_1: \theta < \theta_1$. Consider the test which rejects $H_0$ if and only if $X < c$. How should we choose $c$, as a function of $\theta_1$ and $\alpha$, to get a test with significance level $\alpha$? Carefully justify your answer.

I am struggling to understand how to approach this question and how to carry it out. Any help would be much appreciated, as this question is due today. Thanks!

Best answer:

The significance level $\alpha$ corresponds to the maximum Type I error you are willing to accept for the test; that is to say, the erroneous conclusion to reject the null hypothesis when it is true: $$\Pr[\text{reject } H_0 \mid H_0 \text{ true}] \le \alpha.$$ Since you are already told that the criterion to reject $H_0$ is if $X < c$, and you are also told that the null hypothesis is $H_0 : \theta \ge \theta_1$, this becomes $$\Pr[X < c \mid \theta \ge \theta_1] \le \alpha.$$

Questions you should ask yourself:

  1. For a fixed $\theta$, what is the conditional probability $\Pr[X < c \mid \theta]$?
  2. As $\theta$ increases, does $\Pr[X < c \mid \theta]$ increase, or decrease, for a fixed $c$?
  3. How does this inform the choice of $\theta$ under $H_0$, and $c$ as a function of $\theta_1$ and $\alpha$, to ensure a level $\alpha$ test?
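Working through these questions leads to the standard resolution (stated here as a hint, not given in the original answer): for $c \le \theta$, $\Pr[X < c \mid \theta] = c/\theta$, which decreases in $\theta$, so the worst case under $H_0$ is the boundary $\theta = \theta_1$, suggesting $c = \alpha\,\theta_1$. A minimal Monte Carlo sketch can check this choice numerically; the function name and parameters are illustrative, not from the original post:

```python
import random

def type_i_error_rate(theta, theta1, alpha, trials=200_000):
    """Monte Carlo estimate of Pr[X < c | theta] for the test that
    rejects H0 when X < c, using the candidate cutoff c = alpha * theta1."""
    c = alpha * theta1
    rejections = sum(1 for _ in range(trials) if random.uniform(0, theta) < c)
    return rejections / trials

random.seed(0)
# At the boundary theta = theta1, the rejection probability should be ~ alpha.
print(type_i_error_rate(theta=1.0, theta1=1.0, alpha=0.05))  # ~ 0.05
# For larger theta under H0, it is smaller (c / theta = alpha * theta1 / theta).
print(type_i_error_rate(theta=2.0, theta1=1.0, alpha=0.05))  # ~ 0.025
```

The estimates should sit near $\alpha$ at $\theta = \theta_1$ and strictly below $\alpha$ for any $\theta > \theta_1$, confirming that the Type I error is maximized at the boundary of $H_0$.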