Number of samples needed to distinguish between two Bernoulli distributions


In a paper I am reading, the authors make the following claim in passing (paraphrased):

Suppose there are two Bernoulli distributions with means $p_0$ and $p_1$ where $\delta=p_1-p_0$. One distribution is chosen (by a fair coin flip) and then $n$ i.i.d. samples are drawn from it. The probability of error (incorrectly identifying the correct distribution based on the i.i.d. samples) is $\Omega(1)$ unless the number of samples is $n=\Omega(1/\delta^2)$.

What is the intuition/reasoning behind this claim? I am also confused about the setup, which the paper states rather hastily: are both means $p_0$ and $p_1$ known, or only their difference $\delta$? And what statistic (the sample mean?) and decision procedure (pick the distribution whose mean is closer to the sample mean?) are being used to infer the distribution? Any references would also be appreciated.
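For what it's worth, here is my own back-of-envelope attempt so far. The sample mean of $n$ Bernoulli trials has standard deviation at most $\frac{1}{2\sqrt{n}}$, so a gap of $\delta$ between the two means should only become detectable once $\frac{1}{2\sqrt{n}} \lesssim \delta$, i.e. $n \gtrsim \frac{1}{4\delta^2}$. A quick Monte Carlo sketch of the natural test (assuming both means are known, and thresholding the sample mean at the midpoint $(p_0+p_1)/2$ — my guess at the intended procedure, not something the paper specifies) seems consistent with this:

```python
import random

def error_rate(p0, p1, n, trials=5000, seed=0):
    """Monte Carlo estimate of the error probability of the midpoint test:
    a fair coin picks one of the two Bernoulli distributions, we draw n
    i.i.d. samples, and guess p1 iff the sample mean exceeds (p0+p1)/2."""
    rng = random.Random(seed)
    threshold = (p0 + p1) / 2
    errors = 0
    for _ in range(trials):
        p = p1 if rng.random() < 0.5 else p0  # fair coin flip picks the distribution
        mean = sum(rng.random() < p for _ in range(n)) / n
        guess_is_p1 = mean > threshold
        if guess_is_p1 != (p == p1):
            errors += 1
    return errors / trials

# delta = 0.1, so 1/delta^2 = 100.
# With n well below 1/delta^2 the test is barely better than guessing;
# with n well above 1/delta^2 the error probability becomes small.
print(error_rate(0.45, 0.55, n=10))    # n = 0.1/delta^2 : error stays Omega(1)
print(error_rate(0.45, 0.55, n=1000))  # n = 10/delta^2  : error is small
```

(With $\delta = 0.1$, the $n=10$ error hovers near $0.38$, essentially a coin flip, while the $n=1000$ error drops below $0.01$.) Of course this only shows the *sample-mean* test needs $n = \Omega(1/\delta^2)$; the paper's claim seems to be a lower bound over all procedures, which I assume requires an information-theoretic argument.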