Where can using two different test statistics in a hypothesis test lead you?


Do two test statistics at the same $\alpha$ value give the same Type I error rate and the same decision? I think it is clear that they give the same Type I error rate by definition, but do they always imply the same decision? In other words, is it possible for one test statistic to reject the null while the other fails to reject? If so, would anyone please give me an elementary example?

No, they do not always imply the same decision.

You are right that they give the same Type I error rate, because this is the "level of strictness" you select. Here is an easy example.

Let $X_1,\dots, X_n$ be all i.i.d. normal random variables with mean $\mu$ and known variance $\sigma^2$. We wish to test

$$H_0: \mu = \mu_0\quad vs. \quad H_a: \mu >\mu_0.$$

We consider two estimators for $\mu$: $\hat{\mu}_1 = \overline{X}_n$ (the sample mean) and $\hat{\mu}_2 = X_i$, the $i$-th observation for some fixed $1 \leq i \leq n$.

Both estimators are unbiased. Of course you would choose $\hat{\mu}_1$ because it has smaller variance, but that is not the issue here. The test statistics associated with each estimator are $$T_1 = \frac{\overline{X}_n-\mu_0}{\sigma/\sqrt{n}},\quad T_2=\frac{X_i-\mu_0}{\sigma}.$$ Both are $N(0,1)$-distributed under $H_0$.
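As a quick sanity check, a short Monte Carlo simulation (with hypothetical parameters $n = 25$, $\mu_0 = 0$, $\sigma = 1$) confirms that both statistics behave like standard normals under $H_0$:

```python
import math
import random
import statistics

# Hypothetical setup: n = 25 i.i.d. N(mu0, sigma^2) observations under H0.
random.seed(0)
n, mu0, sigma, reps = 25, 0.0, 1.0, 20000

t1_samples, t2_samples = [], []
for _ in range(reps):
    x = [random.gauss(mu0, sigma) for _ in range(n)]
    # T1 standardizes the sample mean; T2 standardizes one fixed observation.
    t1_samples.append((sum(x) / n - mu0) / (sigma / math.sqrt(n)))
    t2_samples.append((x[0] - mu0) / sigma)

# Both empirical means should be close to 0, both standard deviations close to 1.
print(statistics.mean(t1_samples), statistics.stdev(t1_samples))
print(statistics.mean(t2_samples), statistics.stdev(t2_samples))
```

Both statistics pass the check, even though $T_2$ throws away almost all of the data; the difference between them shows up in power, not in their null distribution.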

Maybe you start to see the point. Let $R_1$ denote the rejection region for $T_1$ and $R_2$ the rejection region for $T_2$. It is natural to reject whenever the observed estimator exceeds some constant $c$ (i.e., the observed estimate of the mean is so large that we go for $H_a$). That is, $$R_1 = \{\hat{\mu}_1 > c_1\},\quad R_2=\{\hat{\mu}_2>c_2\}.$$

Fix a significance level $\alpha$. Then $$\alpha = P(R_1|H_0) = P(R_2|H_0),$$ in other words, $$\alpha = P\left(\hat{\mu}_1>c_1|H_0\right) = P\left(\hat{\mu}_2>c_2|H_0\right).$$

Equivalently, $$\alpha = P\left(N(0,1)>\frac{c_1-\mu_0}{\sigma/\sqrt{n}}\right) = P\left(N(0,1)>\frac{c_2-\mu_0}{\sigma}\right).$$

Hence, the rejection regions are: $$R_1=\{\overline{X}_n > \mu_0 + z_{\alpha}\frac{\sigma}{\sqrt{n}}\}, \quad R_2=\{X_i > \mu_0 + z_{\alpha}\sigma\}.$$

As you can see, the two rejection regions differ ($R_2$ is "more strict"), so on the same data one test can reject while the other does not.
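The two rejection rules can be checked numerically. The following sketch uses hypothetical values ($n = 25$, $\mu_0 = 0$, $\sigma = 1$, $\alpha = 0.05$) and a fixed illustrative sample chosen so that the sample mean clears its critical value while the single observation does not:

```python
import math
from statistics import NormalDist

# Hypothetical setup: n = 25 observations, known sigma = 1,
# testing H0: mu = 0 against Ha: mu > 0 at level alpha = 0.05.
n, mu0, sigma, alpha = 25, 0.0, 1.0, 0.05
z_alpha = NormalDist().inv_cdf(1 - alpha)  # upper-alpha normal quantile, ~1.645

# Illustrative sample: every observation equals 0.5.
x = [0.5] * n
xbar = sum(x) / n
xi = x[0]  # the fixed single observation used by T2

# Rejection rules derived above.
reject_1 = xbar > mu0 + z_alpha * sigma / math.sqrt(n)  # mean-based test
reject_2 = xi > mu0 + z_alpha * sigma                   # single-observation test

print(reject_1, reject_2)  # True False: same alpha, different decisions
```

Here $\overline{X}_n = 0.5$ exceeds the cutoff $\mu_0 + z_\alpha \sigma/\sqrt{n} \approx 0.329$, but $X_i = 0.5$ falls short of $\mu_0 + z_\alpha \sigma \approx 1.645$, so the first test rejects and the second does not.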

You can think of other examples, such as $\hat{\mu}_3=\mathrm{Me}(X_1,\dots,X_n)$ (the sample median), which is also unbiased; but its distribution is more complex, so for this example I chose the simple estimator $X_i$ for some fixed $i$.


The only way two different statistics can be decision-equivalent is if they are functions of each other, so that the percentiles, $\alpha$'s, level sets, and rejection regions all match.

A two-sided hypothesis test using the absolute value of $t$ is the same as a hypothesis test based on $t^2$. This assumes that if the question asked in the first test is whether $|t|>c$, the second test asks the probabilistically identical question whether $t^2 > c^2$.

If you mean actually substituting the second statistic into a hypothesis test based on the first one, such as asking whether $t^2 > c$ instead of $|t| > c$: that is no longer a hypothesis test at the same level. For the substituted question to always give the same answers, the two statistics would have to be equal.
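The distinction between matched and unmatched cutoffs can be illustrated with a few arbitrary values of $t$ and a hypothetical cutoff $c = 2$:

```python
# Matched cutoffs: |t| > c and t^2 > c^2 always give the same decision.
# Unmatched cutoff: t^2 > c (same c, not squared) need not agree with |t| > c.
c = 2.0
for t in [-3.0, -1.5, 0.0, 1.4, 2.5]:
    matched = (abs(t) > c) == (t**2 > c**2)  # always True
    naive = (abs(t) > c) == (t**2 > c)       # can be False, e.g. at t = -1.5
    print(t, matched, naive)
```

At $t = -1.5$, for instance, $|t| = 1.5 < 2$ but $t^2 = 2.25 > 2$, so the naive substitution flips the decision while the matched version agrees everywhere.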