I'm trying to (efficiently) prove that a transformation of the sum of a sequence of $n$ Bernoulli$(p)$ trials ($X_1,\dots,X_n$) converges in distribution to $N(0,1)$.
Specifically, if we denote $B = \sum_{i=1}^n X_i$, we would like to show that $\sqrt{\frac{n}{B(n-B)}}(B-np) \xrightarrow{D} N(0,1)$.
In my mind, the easiest way to do this would be to calculate the MGF of the transformed statistic and take the limit as $n \rightarrow \infty$; however, this seems awfully messy. I also thought that the transformation looked close to the form required to use the Central Limit Theorem, but it is not quite the same, since the scaling involves the random quantity $B$ rather than the constant $\sqrt{np(1-p)}$.
Any thoughts on an elegant proof?
The MGF would work, but there is a cleaner route via Slutsky's theorem. Factor the statistic as
$$ \sqrt{\frac{n}{B(n-B)}}\,(B-np) \;=\; \underbrace{\frac{B-np}{\sqrt{np(1-p)}}}_{=:X_n} \cdot \underbrace{\sqrt{\frac{p(1-p)}{\frac{B}{n}\left(1-\frac{B}{n}\right)}}}_{=:Y_n}. $$
Since $\frac{B}{n} \to p$ in probability by the weak law of large numbers (equivalently, consistency of the MLE), and convergence in probability is preserved under continuous transformations (the continuous mapping theorem; see for example these notes), we get
$$ Y_n \to \sqrt{\frac{p(1-p)}{p(1-p)}} = 1 \quad \text{in probability}. $$
Meanwhile, $X_n \xrightarrow{D} N(0,1)$ by the CLT applied to the Bernoulli sum; this is the de Moivre–Laplace normal approximation to the Binomial distribution (see e.g. Wiki).
Combining the two by Slutsky's theorem, $X_n Y_n \xrightarrow{D} 1 \cdot N(0,1) = N(0,1)$, which is exactly the claim.
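As a sanity check (not part of the proof), here is a small simulation sketch, assuming NumPy is available: it draws many replicates of $B$ for a large fixed $n$ and checks that the studentized statistic has roughly mean 0 and standard deviation 1.

```python
import numpy as np

# Simulation sketch: for large n, the studentized statistic
# sqrt(n / (B*(n-B))) * (B - n*p) should look approximately N(0, 1).
rng = np.random.default_rng(0)
n, p, reps = 5000, 0.3, 20000

# Each draw of B is the sum of n Bernoulli(p) trials.
B = rng.binomial(n, p, size=reps)

# Guard against the (exponentially rare) degenerate cases B = 0 or B = n,
# where the scaling factor is undefined.
mask = (B > 0) & (B < n)
Z = np.sqrt(n / (B[mask] * (n - B[mask]))) * (B[mask] - n * p)

print(Z.mean(), Z.std())  # both should be close to 0 and 1 respectively
```

One could go further and compare the empirical quantiles of `Z` against standard normal quantiles, but matching the first two moments already illustrates the Slutsky argument above.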