Finding P(Error) in a Hypothesis Test for Population Mean $\mu$


I am trying to find the probability of making any error (Type I or Type II) in a hypothesis test for a population mean $\mu.$

I tried to use Bayes Theorem:

$P($Error$)=P($Reject $H_0|H_0$ True$)P(H_0$ True$) +P($Don't Reject $H_0|H_0$ False$)P(H_0$ False$)$

This becomes:

$P($Error$)=\alpha P(\mu=\mu_0) +\beta P(\mu\neq\mu_0)$

Which you could also write as

$P($Error$)=(\alpha - \beta) P(\mu=\mu_0) +\beta$

However, I am having some difficulty proceeding from this step since $\mu$ is not a random variable. An expression like $P(\mu=\mu_0)$ isn't well-defined in the empirical sense...

If I treat $\mu$ as a continuous random variable (in the Bayesian sense), I end up with $P(\mu=\mu_0)=0$ and so $P($Error$)=\beta.$

However, if I assign a symmetric Bernoulli prior and say

$P(H_0$ True$)=\frac{1}{2}=P(H_0$ False$)$

Then $P($Error$)=\frac{\alpha + \beta}{2}$ and $P($Correct Decision$)=1-\frac{\alpha + \beta}{2}$?
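The symmetric-prior result $P(\text{Error})=\frac{\alpha+\beta}{2}$ can be checked by simulation. Below is a minimal Monte Carlo sketch (not from the original post) using a two-sided $z$-test with known $\sigma$; the alternative value $\mu_1$, the sample size, and the seed are arbitrary choices made for illustration.

```python
import random
import statistics
from statistics import NormalDist

random.seed(0)
nd = NormalDist()

alpha = 0.05
n, mu0, mu1, sigma = 25, 0.0, 0.4, 1.0   # mu1: an arbitrary "H0 false" value
z_crit = nd.inv_cdf(1 - alpha / 2)       # two-sided critical value

# Analytic Type II error rate for this specific point alternative:
# under mu = mu1, the z-statistic is N(shift, 1), so
# beta = P(|Z| <= z_crit) = Phi(z_crit - shift) - Phi(-z_crit - shift)
shift = (mu1 - mu0) * n**0.5 / sigma
beta = nd.cdf(z_crit - shift) - nd.cdf(-z_crit - shift)

trials = 100_000
errors = 0
for _ in range(trials):
    h0_true = random.random() < 0.5      # symmetric Bernoulli prior on H0
    mu = mu0 if h0_true else mu1
    xbar = statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
    z = (xbar - mu0) * n**0.5 / sigma
    reject = abs(z) > z_crit
    errors += (reject if h0_true else not reject)

print(errors / trials)       # empirical P(Error)
print((alpha + beta) / 2)    # predicted (alpha + beta) / 2
```

The two printed numbers should agree to within Monte Carlo noise, which supports the total-probability calculation above (for this particular point alternative, since $\beta$ depends on the true $\mu$).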

I feel like this is epic symbol pushing. Can someone please help me? Thank you!

Best Answer

If you don't treat $\mu$ as a random variable, then there is no such thing as an overall probability of making an error. If $\mu$ is not a random variable, then it is a fixed value. This means you are either in a universe where $H_0$ is true (then the probability of making an error is $\alpha$) or in one where $H_0$ is false (then the probability of making an error is $\beta$). So if $\mu$ is not random, the error probability depends on your prior assumption about whether $H_0$ is true or false, and such an assumption has to be made beforehand in order to compute the error probability.

If $\mu$ is a discrete random variable with probability $P(\mu = \mu_0)>0$, then the overall error probability can be computed in the way you described above (in the line where you say you used Bayes' theorem — though you probably meant to say "law of total probability" instead of "Bayes' theorem" :-) ).

If $\mu$ is a continuous random variable, then (as you yourself pointed out) indeed $P(\mu = \mu_0) = 0$ and the error probability is $\beta$. In this case, however, it is not very natural to talk about the probability of falsely accepting or rejecting $H_0$, since $H_0: \mu = \mu_0$ is false with probability $1$. It would be more natural to replace $H_0$ with something like $H_0: \mu \in (\mu_0-\varepsilon, \mu_0+\varepsilon)$ for some fixed $\varepsilon > 0$.
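With such an interval null, a continuous prior assigns it positive mass, so the total-probability decomposition from the question becomes usable again. As a sketch (where $\alpha_\varepsilon$ would have to be understood as a bound on the rejection probability over the composite null, and $\beta_\varepsilon$ likewise depends on the test used):

$$P(\text{Error}) = \alpha_\varepsilon\, P\big(\mu \in (\mu_0-\varepsilon, \mu_0+\varepsilon)\big) + \beta_\varepsilon\, P\big(\mu \notin (\mu_0-\varepsilon, \mu_0+\varepsilon)\big),$$

where both prior probabilities are now strictly between $0$ and $1$ for any prior whose density is positive near $\mu_0$.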