I have a criterion defined in terms of the ratio of the mean and standard deviation of a variable, e.g. $\mu/\sigma > a$ (assume the variable is non-negative). There seem to be several ways to evaluate this given a sample of data:
1. $m_0/s > a$, where $m_0 = \frac{1}{n}\sum_{i=1}^{n}x_i$ and $s=\sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - m_0)^2}$. Here $m_0$ is an unbiased estimate of $\mu$, but $s$ is a biased estimate of $\sigma$.
2. Squaring the ratio, one can instead evaluate $m_0^2/s^2 > a^2$. However, while $s^2$ is an unbiased estimate of $\sigma^2$, $m_0^2$ is now a biased estimate of $\mu^2$.
3. Lastly, I can evaluate $m_1^2/s^2 > a^2$, where $m_1^2=\frac{1}{n}\sum_{i=1}^{n}x_i^2-s^2$. Here $m_1^2$ is an unbiased estimate of $\mu^2$ (note that it simplifies to $m_1^2 = m_0^2 - s^2/n$, i.e. $m_0^2$ with an estimate of its bias $\sigma^2/n$ subtracted off) and $s^2$ is an unbiased estimate of $\sigma^2$.
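As a sanity check on the unbiasedness claims above, here is a minimal Monte Carlo sketch in Python (the normal distribution and the values of $\mu$, $\sigma$, $n$ are arbitrary choices of mine, not part of the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 2.0, 1.0, 10, 200_000

# Draw `reps` independent samples of size n and compute the estimators rowwise.
x = rng.normal(mu, sigma, size=(reps, n))
m0 = x.mean(axis=1)                   # sample mean, unbiased for mu
s2 = x.var(axis=1, ddof=1)            # sample variance, unbiased for sigma^2
m1_sq = (x**2).mean(axis=1) - s2      # unbiased for mu^2

# Averaging over many samples should recover the true parameters:
print(m0.mean())     # ~ mu      = 2
print(s2.mean())     # ~ sigma^2 = 1
print(m1_sq.mean())  # ~ mu^2    = 4
```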
However, while in (3) both the numerator and denominator are unbiased, I am not sure whether this means that the estimate of the ratio is unbiased, i.e. whether $E[m_1^2/s^2] = \mu^2/\sigma^2$; I suspect that it is biased.
This would be my first question: is the ratio of unbiased estimates itself unbiased, in general and in this case in particular (perhaps there is something special about the relationship between the squared mean and the variance that makes the ratio unbiased in this case)?
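For what it is worth, a quick simulation (again with arbitrary normal data and a small $n$ of my choosing) suggests the ratio is biased even though the numerator and denominator are individually unbiased:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 1.0, 1.0, 6, 300_000   # true mu^2/sigma^2 = 1

x = rng.normal(mu, sigma, size=(reps, n))
s2 = x.var(axis=1, ddof=1)
m1_sq = (x**2).mean(axis=1) - s2

# Both m1_sq and s2 are unbiased, yet the average of their ratio
# comes out well above the true value of 1:
print((m1_sq / s2).mean())
```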
Lastly, it seems that by rewriting the original criterion as $\mu > a\sigma$ and then as $\mu^2 > a^2\sigma^2$, I can use the unbiased estimators from (3) for $\mu^2$ and $\sigma^2$ and not worry about the ratio at all. It seems strange, though, that by simply rearranging the criterion I can now evaluate it without any bias, even though I am still using the same estimators. What am I missing here?
**Update 1** While $E\left[\frac{1}{n}\sum_{i=1}^{n}x_i^2-s^2\right] = E\left[\frac{1}{n}\sum_{i=1}^{n}x_i^2\right]-E[s^2] = \mu^2 + \sigma^2 - \sigma^2 = \mu^2$, so the estimator is unbiased, it seems that this estimator for $\mu^2$ has a large variance and that for any given sample it is not even guaranteed to be positive (!).
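Here is a quick illustration of both points (the parameters are again hypothetical; a small $n$ with $\mu$ comparable to $\sigma$ makes the effect pronounced):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 0.5, 1.0, 5, 200_000   # true mu^2 = 0.25

x = rng.normal(mu, sigma, size=(reps, n))
s2 = x.var(axis=1, ddof=1)
m1_sq = (x**2).mean(axis=1) - s2            # unbiased for mu^2, but...

print((m1_sq < 0).mean())  # a sizeable fraction of samples estimate mu^2 as negative
print(m1_sq.std())         # and the spread is large compared to mu^2 = 0.25
```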