The answer should be no, but I can't seem to show it.
The central limit theorem for the binomial distribution with parameters $n, p$ ($\mu = np$, $\sigma^2 = np(1 - p)$) states that for large $n$, if $S_n$ is the number of successes out of $n$ trials:
$$ P(a \leq \frac{S_n - np}{\sqrt{np(1 - p)}} \leq b) \approx \Phi(b) - \Phi(a)$$
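As a sanity check, this approximation can be verified numerically. Below is a minimal Monte Carlo sketch (the choices $n = 1000$, $p = 0.3$, $a = -1$, $b = 1$ are arbitrary illustrative values, not from the question):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 0.3          # illustrative parameters
a, b = -1.0, 1.0
mu = n * p
sigma = math.sqrt(n * p * (1 - p))

# Draw many realizations of S_n ~ Bin(n, p) and standardize
s = rng.binomial(n, p, size=200_000)
z = (s - mu) / sigma

# Empirical P(a <= (S_n - np)/sqrt(np(1-p)) <= b)
frac = np.mean((z >= a) & (z <= b))

# Phi(b) - Phi(a) via the error function
phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
print(frac, phi(b) - phi(a))  # the two values should be close
```

For large $n$ the empirical frequency lands near $\Phi(1) - \Phi(-1) \approx 0.6827$, as the theorem predicts.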
The general central limit theorem states that if $S_n = \sum_i X_i$, and each $X_i$ has the same distribution with mean $\mu$ and variance $\sigma^2$, then:
$$ P(a \leq \frac{S_n - n\mu}{\sqrt{n \sigma^2}} \leq b) \approx \Phi(b) - \Phi(a) $$
Suppose each $X_i \sim \text{Bin}(n, p)$, so $\mu = np$ and $\sigma^2 = np(1 - p)$. Plugging this mean and variance into the statement of the general central limit theorem gives us:
$$ P(a \leq \frac{S_n - n^2p}{\sqrt{n^2 p(1 - p)}} \leq b) = P(a \leq \frac{S_n/n - np}{\sqrt{p(1 - p)}} \leq b) $$
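The general CLT does hold for this setup, which can also be checked numerically. The following sketch sums $n$ i.i.d. $\text{Bin}(n, p)$ summands and standardizes with the mean $n^2p$ and variance $n^2p(1-p)$ from the display above ($n = 50$, $p = 0.3$ are arbitrary illustrative values):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 0.3            # illustrative parameters
trials = 100_000

# Each row holds n i.i.d. Bin(n, p) summands; summing a row gives one S_n
s = rng.binomial(n, p, size=(trials, n)).sum(axis=1)

# Standardize with the general-CLT mean n^2 p and variance n^2 p(1-p)
z = (s - n**2 * p) / math.sqrt(n**2 * p * (1 - p))

a, b = -1.0, 1.0
frac = np.mean((z >= a) & (z <= b))
phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
print(frac, phi(b) - phi(a))  # empirical vs. Phi(b) - Phi(a)
```

The empirical frequency again matches $\Phi(b) - \Phi(a)$, so both normalizations are internally consistent; the puzzle is only about how the two statements relate.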
This clearly does not match the CLT for binomial distributions. Succinctly: the general CLT produces an extra factor of $n$ on $\mu$ and an extra factor of $n$ inside the square root of the denominator, which the binomial-specific CLT does not have.
Why is that?