Distribution of relative error.


Suppose I have a random variable $X$ with unknown mean $\mu$, and I can draw $n$ random samples (possibly via a Monte Carlo method, but I believe that's beside the point) from its distribution. I wish to determine $n$ so as to keep my relative error $\left|\frac{\mu - \overline{X}}{\mu}\right|$ within a certain margin, say $1\%$, about $95\%$ of the times I draw those samples.

I have trouble determining $n$ because I don't know the distribution of $\left|\frac{\mu - \overline{X}}{\mu}\right|$, and thus I can't compute the probability of it being within a given margin. Is there a way to do this? Also, how would that method change if I didn't know $\sigma^2$ and had to rely instead on the sample variance $s^2$?

On BEST ANSWER

$\bar X$ has mean $\mu$ and variance $\frac{\sigma^2}{n}$.

Using Chebyshev's inequality, you can say $P(|\mu -\bar X| \le t) \ge 1-\frac{\sigma^2}{nt^2}$ for $t>0$.
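As a quick numerical sanity check of this bound, here is a minimal sketch. The distribution (an exponential with mean $1$) and the values of $n$ and $t$ are illustrative choices, not part of the original question:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy setup: X ~ Exponential(mean 1), so mu = 1 and sigma^2 = 1.
mu, var, n, t = 1.0, 1.0, 100, 0.3
trials = 10_000

# Draw `trials` independent sample means of size n each.
xbar = rng.exponential(mu, size=(trials, n)).mean(axis=1)

# Empirical P(|mu - Xbar| <= t) versus the Chebyshev lower bound.
empirical = (np.abs(mu - xbar) <= t).mean()
bound = 1 - var / (n * t**2)
print(empirical, bound)
```

The empirical probability should comfortably exceed the Chebyshev bound, since Chebyshev makes no distributional assumptions and is typically quite loose.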

You want to consider $t=0.01 |\mu|$, so you need $1-\frac{10000\,\sigma^2}{n \mu^2}\ge 0.95$, i.e. $n \ge \frac{200000\,\sigma^2}{\mu^2}$. This is a problem, since you do not know $\mu$, and if it is close to $0$ then there may be no satisfactory $n$ that you can discover.
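The sample-size rule above can be sketched as follows. The distribution and parameter values are hypothetical choices for illustration; in practice, if $\sigma^2$ and $\mu$ are unknown, one might plug in the sample estimates $s^2$ and $\bar X$, at the cost of the guarantee becoming approximate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: X ~ Normal(mu=10, sigma=1), chosen for illustration.
mu, sigma = 10.0, 1.0

# Chebyshev-based sample size for 1% relative error with >= 95% probability:
# n >= 200000 * sigma^2 / mu^2
n = int(np.ceil(200_000 * sigma**2 / mu**2))

# Verify empirically: draw many independent sample means of size n
# and measure how often the relative error stays within 1%.
trials = 2_000
xbar = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
rel_err = np.abs(mu - xbar) / abs(mu)
coverage = (rel_err <= 0.01).mean()
print(n, coverage)
```

Because Chebyshev is conservative, the observed coverage is typically far above the requested $95\%$; if more is known about the distribution (e.g. approximate normality of $\bar X$ via the CLT), a much smaller $n$ usually suffices.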