Sample proportion and the Central Limit Theorem


Suppose that $ (\Omega,\Sigma,\mathsf{P}) $ is a probability space and that $ (X_{k})_{k \in \mathbb{N}} $ is a sequence of i.i.d. Bernoulli trials on $ (\Omega,\Sigma,\mathsf{P}) $, each with probability of success $ p \in (0,1) $. If we define another sequence $ (\hat{P}_{n})_{n \in \mathbb{N}} $ of random variables on $ (\Omega,\Sigma,\mathsf{P}) $ by $$ \forall n \in \mathbb{N}: \qquad \hat{P}_{n} \stackrel{\text{df}}{=} \frac{1}{n} \sum_{k = 1}^{n} X_{k}, $$ then according to the Central Limit Theorem, we have $$ \forall z \in \mathbb{R}: \qquad \lim_{n \to \infty} \mathsf{P} \! \left( \frac{\hat{P}_{n} - p}{\sqrt{p (1 - p) / n}} \leq z \right) = \Phi(z), $$ where $ \Phi $ denotes the standard normal c.d.f. For each $ n \in \mathbb{N} $, we call $ \hat{P}_{n} $ a sample proportion for a sample of size $ n $.
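This convergence is easy to see numerically. Below is a minimal Monte Carlo sketch (the parameter choices $p = 0.3$, $n = 2000$, the repetition count, and the helper names are my own, not part of the question): it simulates many samples, computes the standardized statistic $(\hat{P}_{n} - p)/\sqrt{p(1-p)/n}$ for each, and compares its empirical c.d.f. with $\Phi(z)$ at a few points.

```python
import math
import random

def standardized_proportions(p, n, reps, rng):
    """Simulate `reps` samples of size `n` and return the standardized statistics."""
    out = []
    for _ in range(reps):
        successes = sum(1 for _ in range(n) if rng.random() < p)
        p_hat = successes / n
        out.append((p_hat - p) / math.sqrt(p * (1 - p) / n))
    return out

def phi(z):
    """Standard normal c.d.f., expressed via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

rng = random.Random(0)
zs = standardized_proportions(p=0.3, n=2000, reps=5000, rng=rng)
for z in (-1.0, 0.0, 1.0):
    # Empirical probability that the statistic is <= z, vs. the normal limit.
    empirical = sum(1 for x in zs if x <= z) / len(zs)
    print(f"z={z:+.1f}  empirical={empirical:.3f}  Phi(z)={phi(z):.3f}")
```

The empirical values should agree with $\Phi(z)$ up to Monte Carlo noise of a couple of percentage points.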

When most statistics textbooks discuss confidence intervals for a sample proportion, they implicitly claim that $$ \frac{\hat{P}_{n} - p}{\sqrt{\hat{P}_{n} (1 - \hat{P}_{n}) / n}} \stackrel{\text{d}}{\longrightarrow} \operatorname{N}(0,1), $$ which is the same as saying that $$ \forall z \in \mathbb{R}: \qquad \lim_{n \to \infty} \mathsf{P} \! \left( \frac{\hat{P}_{n} - p}{\sqrt{\hat{P}_{n} (1 - \hat{P}_{n}) / n}} \leq z \right) = \Phi(z). $$ However, I was unable to rigorously establish this claim using the Central Limit Theorem.
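The textbook claim can at least be sanity-checked by simulation before proving it. The sketch below (again with arbitrary parameter choices of my own) replaces the unknown $p$ in the denominator by $\hat{P}_{n}$ and checks that the empirical c.d.f. of the studentized statistic still matches $\Phi(z)$:

```python
import math
import random

def studentized_proportions(p, n, reps, rng):
    """Simulate `reps` samples of size `n`; return the studentized statistics."""
    out = []
    for _ in range(reps):
        successes = sum(1 for _ in range(n) if rng.random() < p)
        p_hat = successes / n
        if 0 < p_hat < 1:  # the statistic is undefined when p_hat is 0 or 1
            out.append((p_hat - p) / math.sqrt(p_hat * (1 - p_hat) / n))
    return out

def phi(z):
    """Standard normal c.d.f., expressed via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

rng = random.Random(1)
zs = studentized_proportions(p=0.3, n=2000, reps=5000, rng=rng)
for z in (-1.0, 0.0, 1.0):
    empirical = sum(1 for x in zs if x <= z) / len(zs)
    print(f"z={z:+.1f}  empirical={empirical:.3f}  Phi(z)={phi(z):.3f}")
```

Note the guard against $\hat{P}_{n} \in \{0, 1\}$: for finite $n$ the studentized statistic is undefined on that event, though its probability vanishes as $n \to \infty$ when $p \in (0,1)$.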

Could anyone kindly provide references? Thanks!

Best answer:

The most straightforward proof of this result uses Slutsky's theorem, which in turn requires the concept of convergence in probability. Write $$\frac{\hat{P}_{n} - p}{\sqrt{\hat{P}_{n} (1 - \hat{P}_{n}) / n}}= \frac{\hat{P}_{n} - p}{\sqrt{p(1-p) / n}} \cdot \sqrt{ \frac{p(1-p)}{\hat P_n(1-\hat P_n)}}, $$ a product of two factors. The first factor converges in distribution to the standard normal, by the Central Limit Theorem. For the second factor, the strong law of large numbers gives $ \hat{P}_{n} \to p $ almost surely, and since the map $ x \mapsto \sqrt{p(1-p)/(x(1-x))} $ is continuous at $ p \in (0,1) $, the second factor converges almost surely to the constant $ 1 $. Since almost-sure convergence implies convergence in probability, Slutsky's theorem yields the claim.
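For reference, the version of Slutsky's theorem being invoked is the standard one: if $ X_{n} \stackrel{\text{d}}{\longrightarrow} X $ and $ Y_{n} \stackrel{\mathsf{P}}{\longrightarrow} c $ for a constant $ c \in \mathbb{R} $, then $$ X_{n} Y_{n} \stackrel{\text{d}}{\longrightarrow} c X. $$ Here it is applied with $$ X_{n} = \frac{\hat{P}_{n} - p}{\sqrt{p(1-p)/n}}, \qquad Y_{n} = \sqrt{\frac{p(1-p)}{\hat{P}_{n}(1 - \hat{P}_{n})}}, \qquad X \sim \operatorname{N}(0,1), \qquad c = 1. $$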