Computing the error rate of rejecting the null hypothesis (from the decision theoretic perspective)


Consider the following passage from the bottom of p. 23 of *Statistical Decision Theory and Bayesian Analysis*:

[Screenshot of the quoted passage, with the claim in question highlighted in red.]

Problem. I'm having trouble understanding the red-highlighted claim; it seems that the textbook has computed not the error rate among rejections, but the rate of committing either kind of error (falsely rejecting or falsely accepting the null hypothesis).

Why I think this is so. First, the "0-1" loss can be displayed on the following table:

$$\begin{array}{c|c|c|} & \text{Accept $H_0: θ = θ_0$} & \text{Accept $H_1: θ = θ_1$} \\ \hline \text{$θ = θ_0$} & 0 & 1 \\ \hline \text{$θ = θ_1$} & 1 & 0 \\ \hline \end{array}$$

From this table it is clear that

$$ R(θ_0, δ) = (P_{θ_0}(\text{Accept $H_0$}) ⋅ 0) + (P_{θ_0}(\text{Type I error}) ⋅ 1) = P_{θ_0}(\text{Type I error}) = α_0 $$

and

$$ R(θ_1, δ) = (P_{θ_1}(\text{Type II error}) ⋅ 1) + (P_{θ_1}(\text{Reject $H_0$}) ⋅ 0) = P_{θ_1}(\text{Type II error}) = α_1 $$

Now suppose, as in the passage, that $α_0 = 0.01$ and $α_1 = 0.99$. Then

$$ \frac{α_0 + α_1}{2} = \frac{0.01 + 0.99}{2} = 0.5 $$

But this doesn't seem to show that "half of all rejections of the null will actually be in error"; rather, it shows that half of all tests (each of which ends in either rejecting or accepting $H_0$) will end in an erroneous decision.
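This reading can be checked numerically. The sketch below is a Monte Carlo simulation under two assumptions not stated explicitly in the passage: $θ_0$ and $θ_1$ are a priori equally likely, and the (hypothetical) test has Type I error $α_0 = 0.01$ and Type II error $α_1 = 0.99$:

```python
import random

random.seed(1)

ALPHA_0 = 0.01  # assumed Type I error probability:  P(reject H0 | theta_0)
ALPHA_1 = 0.99  # assumed Type II error probability: P(accept H0 | theta_1)

N = 1_000_000
errors = 0

for _ in range(N):
    # Assume the two hypotheses are a priori equally likely.
    null_true = random.random() < 0.5
    # Probability of rejecting H0 given the true state of nature.
    p_reject = ALPHA_0 if null_true else 1 - ALPHA_1
    reject = random.random() < p_reject
    # An error is rejecting a true null or accepting a false one.
    if reject == null_true:
        errors += 1

print(errors / N)  # close to 0.5
```

The fraction of *all* tests ending in error comes out near $\tfrac{1}{2}(α_0 + α_1) = 0.5$, matching the averaged-risk computation above.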

What am I missing?


EDIT. I think I understand the solution. To compute the error rate *among rejections*, assuming $θ_0$ and $θ_1$ are a priori equally likely, one must calculate the proportion of all rejections that occur when $θ_0$ is actually true:

$$ \frac{P_{\theta_0}(\text{Reject } H_0)}{P_{\theta_0}(\text{Reject } H_0) + P_{\theta_1}(\text{Reject } H_0)} = \frac{\alpha_0}{\alpha_0 + (1 - \alpha_1)} = \frac{0.01}{0.01 + 0.01} = 0.5 $$

Is this correct?
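As a sanity check on that computation, here is a Monte Carlo sketch under the same assumptions (hypotheses a priori equally likely; a test with $α_0 = 0.01$ and $α_1 = 0.99$), counting what fraction of rejections occur when $θ_0$ is in fact true:

```python
import random

random.seed(0)

ALPHA_0 = 0.01  # assumed Type I error probability:  P(reject H0 | theta_0)
ALPHA_1 = 0.99  # assumed Type II error probability: P(accept H0 | theta_1)

N = 1_000_000
rejections = 0        # total number of rejections of H0
false_rejections = 0  # rejections made while theta_0 was actually true

for _ in range(N):
    # Assume theta_0 and theta_1 are a priori equally likely.
    null_true = random.random() < 0.5
    p_reject = ALPHA_0 if null_true else 1 - ALPHA_1
    if random.random() < p_reject:
        rejections += 1
        if null_true:
            false_rejections += 1

print(false_rejections / rejections)  # close to 0.5
```

The conditional rate $\alpha_0 / (\alpha_0 + (1 - \alpha_1)) = 0.5$ is recovered, consistent with the passage's claim that half of all rejections of the null are in error.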