I have an unknown probability distribution $p(x)$. I don't know what it is, but I know it is well-behaved (smooth, normalised, and decaying quickly to 0 at infinity). I've created an illustrated example of what it might look like here.
I can't evaluate $p(x)$, but I can draw samples from it. I know from the central limit theorem that if I repeatedly draw $N$ samples, then the standard deviation of the error in the sample mean (relative to the true mean of the distribution) will be proportional to $1/\sqrt{N}$.
Now, say that instead I want to find the probability for a sample to be in some interval, from $x_a$ to $x_b$. In other words, I want to estimate
$$ \frac{\int_{x_a}^{x_b} p(x)\;\mathrm{d}x}{\int_{-\infty}^{\infty} p(x)\;\mathrm{d}x}. $$
I can estimate this by drawing $N$ samples and computing the fraction of them that fall in the interval I'm interested in.
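To make the estimator concrete, here is a minimal sketch of what I mean. Since I can't show the real $p(x)$, a standard normal stands in for it purely for illustration; the interval endpoints and sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_in_interval(samples, x_a, x_b):
    """Fraction of samples falling in [x_a, x_b] -- the estimator described above."""
    return np.mean((samples >= x_a) & (samples <= x_b))

# Stand-in for the unknown p(x): a standard normal, chosen only so the
# example runs. In reality I can only draw samples, not choose p(x).
N = 10_000
samples = rng.standard_normal(N)
estimate = fraction_in_interval(samples, -1.0, 1.0)
print(estimate)  # should land near P(-1 < X < 1) ≈ 0.6827 for this stand-in
```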
My question is: Can I say something general about how fast my estimated fraction, based on $N$ samples, converges towards the true value as $N$ increases? (I realise I may be using the word "converge" in an imprecise manner here.)
I've done numerical experiments for a couple of cases, and it looks to me like the error in my estimate goes down as $1/\sqrt{N}$, however I would like to know if this is always (or often) true. My gut feeling says that this either has to do with the central limit theorem, or that it can somehow be transformed into a Monte Carlo integration problem, but I'm not sure.
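For reference, my numerical experiments look roughly like the sketch below: for each $N$, I repeat the estimate over many independent trials and compute the RMS error against the known true value. Again a standard normal stands in for the unknown $p(x)$, and the trial count is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(1)
x_a, x_b = -1.0, 1.0
true_p = 0.6826894921370859  # P(-1 < X < 1) for the standard-normal stand-in
trials = 500                 # independent repetitions per sample size

for N in (100, 1_000, 10_000):
    samples = rng.standard_normal((trials, N))
    estimates = np.mean((samples >= x_a) & (samples <= x_b), axis=1)
    rms_error = np.sqrt(np.mean((estimates - true_p) ** 2))
    # If the error scales as 1/sqrt(N), this product should stay roughly constant.
    print(N, rms_error, rms_error * np.sqrt(N))
```

In my runs, the last column stays roughly flat as $N$ grows, which is what makes me suspect the $1/\sqrt{N}$ behaviour.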