I'm trying to understand the following approximation to $\dbinom{N}{N/2}$ stated in MacKay's Information Theory, Inference, and Learning Algorithms book (p. 17): $\dbinom{N}{N/2} \approx 2^N \frac{1}{\sqrt{2 \pi N/4}}$. Using the binomial distribution with $p = 1/2$, it is noted that:
$$ 1 = \sum_K \dbinom{N}{K} 2^{-N} \approx 2^{-N} \dbinom{N}{N/2} \sum_{r=-N/2}^{N/2} e^{-r^2/2 \sigma^2} \approx 2^{-N} \dbinom{N}{N/2} \sqrt{2 \pi} \sigma $$
Can someone point out to me why $\sum_K \dbinom{N}{K} 2^{-N} \approx 2^{-N} \dbinom{N}{N/2} \sum_{r=-N/2}^{N/2} e^{-r^2/2 \sigma^2} \approx 2^{-N} \dbinom{N}{N/2} \sqrt{2 \pi} \sigma$?
You get the result faster using Stirling's approximation $$ n!\simeq\sqrt{2\pi n}\left(\frac{n}{e}\right)^n $$ With $N=2n$ this gives $$ \binom{2n}{n}=\frac{(2n)!}{(n!)^2}\simeq\frac{\sqrt{2\pi N}\,N^N e^{-N}}{2\pi n\,(n^n)^2\,e^{-N}}=\frac{2^N\sqrt{2\pi N}}{2\pi n}=\frac{2^N}{\sqrt{2\pi\cdot N/4}} $$ where the last two steps use $N^N=(2n)^{2n}=2^N(n^n)^2$ and $\frac{\sqrt{2\pi N}}{\pi N}=\frac1{\sqrt{\pi N/2}}$.
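As a quick numerical sanity check (my own sketch, not part of the quoted derivation), one can compare the exact central binomial coefficient with the Stirling-based estimate $2^N/\sqrt{2\pi N/4}$; the ratio tends to $1$ as $N$ grows:

```python
from math import comb, pi, sqrt

# Ratio of the Stirling-based estimate 2^N / sqrt(2*pi*N/4)
# to the exact central binomial coefficient binom(N, N/2).
for N in (10, 100, 1000):
    exact = comb(N, N // 2)
    approx = 2**N / sqrt(2 * pi * N / 4)
    print(N, approx / exact)  # approaches 1 as N grows
```

The estimate slightly overshoots, consistent with the next-order Stirling correction.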
The cited derivation uses the fact that for large $N$ the binomial distribution can be approximated by a normal distribution with mean $\mu=\frac{N}2$ and variance $\sigma^2=N/4$. Then the probability of the event $X=k$, $p_k=2^{-N}\binom{N}{k}$, can be approximated via the normal density (and some mean value theorem) as $$ p_k\simeq \frac1{\sqrt{2\pi}\,\sigma}e^{-\frac{(k-\mu)^2}{2\sigma^2}} =\frac1{\sqrt{2\pi\cdot N/4}}\,e^{-2N(k/N-1/2)^2} $$ That this approximation is valid, at least for $k\sim N/2$, again uses the Stirling formula, so not much insight is gained from this approach.
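The two "$\approx$" steps in the quoted chain can also be checked numerically (a small sketch I added, with $\sigma^2=N/4$ and $r=k-N/2$): the Gaussian sum over $r$ is essentially the full integral $\sqrt{2\pi}\,\sigma$, and multiplying it by the peak term $2^{-N}\binom{N}{N/2}$ recovers the normalization $1$:

```python
from math import comb, exp, pi, sqrt

N = 1000
sigma = sqrt(N / 4)

# Peak term 2^{-N} binom(N, N/2) and the Gaussian sum over r = k - N/2.
peak = comb(N, N // 2)
gauss_sum = sum(exp(-r**2 / (2 * sigma**2)) for r in range(-N // 2, N // 2 + 1))

# Step 2: the Gaussian sum is very close to the integral sqrt(2*pi) * sigma.
print(gauss_sum / (sqrt(2 * pi) * sigma))

# Steps 1 and 2 together reproduce 1 = sum_k binom(N, k) 2^{-N}, up to O(1/N).
approx_one = 2.0**-N * peak * gauss_sum
print(approx_one)
```

For $N$ this large the sum over integer $r$ matches $\sqrt{2\pi}\,\sigma$ to many digits, while the reconstructed normalization differs from $1$ only by the $O(1/N)$ error of the Gaussian profile at the peak.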