Starting at the origin and taking one step left or right with equal probability, what is the probability that you'll end up at 0 after 10,000 steps?
I figured it'd be $\frac1{2^{10000}}\binom{10000}{5000}$, since you take half of the steps in one direction and half in the other in no particular order, so you divide the number of paths that land you at 0 by the total number of possible paths.
I got a probability of about 0.008.
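That direct count is easy to sanity-check; a minimal sketch in Python, relying on its exact big-integer arithmetic:

```python
from math import comb

# Exact probability of being back at 0 after n steps: C(n, n/2) / 2^n.
# Python integers are arbitrary precision, so this is exact up to the
# final (correctly rounded) float division.
n = 10000
p_exact = comb(n, n // 2) / 2 ** n
print(p_exact)  # about 0.008, as estimated
```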
But how do I get this result using central limit theorem?
We shall consider Bernoulli random variables $X_i$ that take the value $1$ with probability $1/2$, corresponding to a step to the right at step $i$, and the value $0$ with probability $1/2$, corresponding to a step to the left at step $i$. Define
$$Y = \sum_{i=1}^{10000}X_i.$$
We wish to approximate
$$P(Y = 5000) = {10000 \choose 5000}\frac{1}{2^{10000}}$$
the probability that we end up at $0$ after $10000$ steps. We shall use the DeMoivre-Laplace limit theorem (Feller VII.3, eq. 3.16), which is generalized by the central limit theorem (Feller X.1), to approximate this central binomial probability as
$$P(Y=5000) = P\left(-\frac{0.5}{\sqrt{2500}} \le \frac{Y - 5000}{\sqrt{2500}} \le \frac{0.5}{\sqrt{2500}} \right)$$
$$\approx \Phi(0.5/\sqrt{2500}) - \Phi(-0.5/\sqrt{2500}) \approx 0.00797871$$
where $\Phi(x)$ is the standard normal cumulative distribution function. The $2500$ is the binomial variance $npq$ with $n = 10000$, $p=q=1/2$. In effect, we approximate the binomial distribution of $Y$, which has mean $np$ and variance $npq$, by a normal distribution with the same mean and variance, and then normalize $Y$ so that we may use the standard normal CDF. The factors of $\pm 0.5$ come not from CLT but from the continuity correction required when approximating the discrete binomial distribution by a continuous normal distribution: the interval from $-0.5$ to $+0.5$ on the normal corresponds naturally to the single lattice point $Y = 5000$, i.e. position $0$, on the binomial.
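Numerically, this continuity-corrected estimate can be reproduced with the Python standard library (`statistics.NormalDist().cdf` plays the role of $\Phi$):

```python
from math import sqrt
from statistics import NormalDist

n, p, q = 10000, 0.5, 0.5
sigma = sqrt(n * p * q)          # sqrt(npq) = sqrt(2500) = 50

# Continuity correction: the lattice point Y = 5000 corresponds to the
# interval [4999.5, 5000.5], i.e. +/- 0.5/sigma in standard units.
Phi = NormalDist().cdf
approx = Phi(0.5 / sigma) - Phi(-0.5 / sigma)
print(approx)  # ≈ 0.00797871
```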
Compare this numerical result to the exact answer, which to 8 decimal places is 0.00797865. This is slightly more accurate than the result of the DeMoivre-Laplace theorem based on the mass function used in my other answer, as well as the answer of D. Thomine, both of which give 0.00797885. In fact, we may verify that for even $n$ from 2 to several billion, the method in this answer is more accurate than that one, with the biggest differences coming at small $n$. By performing series expansions, one can verify that this is the case for all even $n$.
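That comparison is easy to spot-check for a few even $n$. A sketch, where the mass-function approximation is taken to be $1/\sqrt{2\pi npq}$, the approximating normal density evaluated at its mean (the variable names are mine, for illustration):

```python
from math import comb, pi, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf

results = []
for n in (2, 10, 100, 1000, 10000):
    npq = n / 4                               # variance with p = q = 1/2
    exact = comb(n, n // 2) / 2 ** n          # exact central probability
    cdf_approx = Phi(0.5 / sqrt(npq)) - Phi(-0.5 / sqrt(npq))
    pdf_approx = 1 / sqrt(2 * pi * npq)       # normal density at the mean
    results.append((abs(cdf_approx - exact), abs(pdf_approx - exact)))
    print(n, results[-1])
```

At every $n$ tried, the CDF-based error comes out smaller than the density-based one, consistent with the claim above.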
Note that some sources refer to this as the DeMoivre-Laplace central limit theorem.
Now an objection has been raised that says we can't use CLT to approximate this probability. Approximating a probability by CLT means identifying a probability which CLT says converges as $n\rightarrow \infty$, and then approximating it for some finite $n$ by the value CLT gives as the limit. The approximation will be good to the extent that the probability has converged by that value of $n$. The error can also be bounded with something like the Berry-Esseen theorem if desired; for Bernoulli random variables, the error decreases like $1/\sqrt{n}$.
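That $1/\sqrt{n}$ decay can be observed directly. A sketch measuring the worst-case CDF error for the symmetric binomial (the helper `max_cdf_error` is my own name; no continuity correction is applied, so the dominant error is the jump at the lattice points):

```python
from math import comb, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf

def max_cdf_error(n):
    """sup over k of |P(Y_n <= k) - Phi((k - np)/sqrt(npq))|, p = 1/2."""
    mu, sigma = n / 2, sqrt(n) / 2
    cdf, worst = 0, 0.0
    for k in range(n + 1):
        cdf += comb(n, k)                     # exact big-integer partial sum
        worst = max(worst, abs(cdf / 2 ** n - Phi((k - mu) / sigma)))
    return worst

errors = [max_cdf_error(n) for n in (100, 400, 1600)]
print(errors)   # roughly halves each time n quadruples, i.e. ~ 1/sqrt(n)
```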
For our case, CLT takes the form of the DeMoivre-Laplace limit theorem for Bernoulli random variables, which states
$$P\left(a\sqrt{npq} \le Y_n-np \le b\sqrt{npq}\right)\rightarrow \Phi(b)-\Phi(a)$$
with
$$Y_n = \sum_{i=1}^nX_i$$
and $p=q=1/2$. Now the objection concerns the fact that $a$ and $b$ must be constants, so that the interval grows as $\sqrt{n}$. We are in fact setting $a$ and $b$ to constants, specifically $a=-0.5/\sqrt{2500}$ and $b = +0.5/\sqrt{2500}$, not $\pm0.5/\sqrt{npq}$. CLT then says that the probability that the walker is within $0.5/\sqrt{2500}$ standard deviations of $0$ converges to the RHS above. The question asked about that probability for $n=10000$. We are using CLT to approximate that probability, and the approximation will be as good as the convergence of that probability by $n=10000$. But note that this probability does NOT correspond to the probability that the walker will be at $0$, that is $P(Y_n=np)$, for all $n$; it coincides with that probability only for $n$ around $10000$. That is fine, because we weren't asked to provide $P(Y_n=np)$ and show it converges by CLT; we were asked to approximate one probability using CLT in the sense defined above. The objection concerns $P(Y_n=np)$, whose interval does not grow as $\sqrt{n}$ as CLT requires. We are concerned with $P\left(\left|\frac{Y_n-np}{\sqrt{npq}}\right| \le \frac{0.5}{\sqrt{2500}}\right)$, whose interval does grow as $\sqrt{n}$ as CLT requires.
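As a sanity check of the theorem in this form, with order-one constants (say $a=-1$, $b=+1$) the exact binomial probability does approach $\Phi(1)-\Phi(-1)\approx 0.6827$ as $n$ grows. A sketch (the values of $n$ are arbitrary illustrative choices):

```python
from math import ceil, comb, floor, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf
a, b = -1.0, 1.0                       # fixed constants, as the theorem requires
limit = Phi(b) - Phi(a)                # ≈ 0.6827

errors = []
for n in (100, 1600, 25600):
    mu, sigma = n / 2, sqrt(n) / 2     # np and sqrt(npq) for p = q = 1/2
    lo, hi = ceil(mu + a * sigma), floor(mu + b * sigma)
    prob = sum(comb(n, k) for k in range(lo, hi + 1)) / 2 ** n
    errors.append(abs(prob - limit))
print(errors)  # shrinking toward 0 as n grows
```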
Now if we were to use this method for general $n$, always taking $a$ and $b$ to be $\pm0.5/\sqrt{npq}$, then we could not show convergence of our probabilities $P(|Y_n - np| \le 0.5) = P(Y_n = np)$ as $n\rightarrow \infty$ using the standard version of CLT above, which requires that $a$ and $b$ be constants. However, we could still show convergence of these probabilities by applying a theorem (Feller VII.3, theorem 1) which relaxes this constraint for Bernoulli random variables, so that $a$ and $b$ may be decreasing functions of $n$ going as $1/\sqrt{n}$, as we would require.
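Consistent with that relaxed theorem, taking $a_n = -0.5/\sqrt{npq}$ and $b_n = +0.5/\sqrt{npq}$ does give an approximation to $P(Y_n = np)$ whose error vanishes as $n$ grows; a sketch:

```python
from math import comb, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf

errors = []
for n in (100, 10000, 100000):
    sigma = sqrt(n) / 2                          # sqrt(npq) with p = q = 1/2
    exact = comb(n, n // 2) / 2 ** n             # P(Y_n = np), exactly
    approx = Phi(0.5 / sigma) - Phi(-0.5 / sigma)  # a_n, b_n shrink ~ 1/sqrt(n)
    errors.append(abs(approx - exact))
print(errors)   # the error shrinks rapidly as n grows
```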
Specifically, D. Thomine has objected that the above method is "false as it stands, because if a and b are too small, then you need in general very large values of n before [convergence]. In particular, you cannot take $a, b \sim 1/\sqrt{n}$." We have seen that this is false because we have Bernoulli random variables and can apply a special theorem that guarantees convergence. It has also been shown false by direct calculation, which demonstrates that this method converges faster and better than D. Thomine's mass-function approximation for all even values of $n$ starting with 2 and continuing into the billions. Series expansions for the approximations in question, as well as for the binomial, have been derived which, barring errors, show that the above method gives a value closer to the exact binomial value for all even $n$, in agreement with all calculations to date.
So in summary, we need not show convergence of our approximation method for $P(Y_n=np)$, and we cannot show it with the CLT above. However, this approximation method does produce values of $P(Y_n=np)$ which converge, and this can be shown with a theorem specific to Bernoulli random variables that is routinely used in conjunction with the DeMoivre-Laplace limit theorem for Bernoulli random variables (see examples in Feller VII.4).