Central Limit Theorem proof.


I am trying to understand the proof of the Central Limit Theorem in my book.

However, I don't really understand what is going on. I know the proof assumes that the moment generating function of each $W_{i}$ exists, and then shows that the limit of these generating functions approaches $e^{t^2/2}$.

Can someone please explain what is happening?

This is almost in the middle of the proof.

We are given that $M(0) = 1$, $M^{(1)}(0) = 0$, and $M^{(2)}(0) = 1$.

They apply Taylor's theorem to write $M(t)$ and get

$M(t) = 1 + M^{(1)}(0)t + \frac{t^2}{2} M^{(2)}(r) = 1 + \frac{t^2}{2} M^{(2)}(r)$.

However, I don't understand why the proof stops after the second derivative in the Taylor expansion from the above expression.

In addition, the book does not explain what it's doing in the following algebraic expression:

$$\lim_{n\rightarrow \infty}\left[M\!\left(\frac{t}{\sqrt n}\right)\right]^n = \lim_{n\rightarrow \infty}\left[1 + \frac{t^2}{2n} M^{(2)}(s)\right]^n = \exp\left(\lim_{n\rightarrow \infty} n \ln\left[1 + \frac{t^2}{2n} M^{(2)}(s)\right]\right) = \exp\left(\lim_{n\rightarrow \infty} \frac{t^{2}}{2}M^{(2)}(s)\,\frac{\ln\left[1 + \frac{t^2}{2n} M^{(2)}(s)\right] - \ln(1)}{\frac{t^2}{2n} M^{(2)}(s)}\right).$$

and $|s| < \frac{|t|}{\sqrt n}$.

I think they are taking the natural log on both sides and applying the quotient rule.

Can someone please help me understand the above? I would really appreciate it, since it will let me understand the rest of the proof.

Thank you very much.

There are 2 answers below.

Answer 1:
First of all,
$$ M(t)=\sum_{n=0}^{+\infty}\frac{M^{(n)}(0)}{n!}t^n=1+\frac{t^2}{2}+\sum_{n=3}^{+\infty}\frac{M^{(n)}(0)}{n!}t^n=1+\frac{t^2}{2}+o(t^2) $$
so the generating function does not stop after $t^2$; rather, the remaining terms are negligible.

Now, the generating function of $\frac{1}{\sqrt{n}}\sum_{i=1}^n W_i$ is
$$ M\left(\frac{t}{\sqrt{n}}\right)^n=\left(1+\frac{t^2}{2n}+o\left(\frac{t^2}{n}\right)\right)^n=\exp\left(n\log\left(1+\frac{t^2}{2n}+o\left(\frac{t^2}{n}\right)\right)\right)=e^{\frac{t^2}{2}+o(1)} $$
when $n\rightarrow +\infty$. Here we used the fact that $\log(1+x)\sim x$ when $x\rightarrow 0$, which is what the book writes as
$$ \log(1+x)=x\frac{\log(1+x)-\log(1)}{x}\sim x $$
because $\lim\limits_{x\rightarrow 0}\frac{\log(1+x)-\log(1)}{x}=\log'(1)=1$, with $x=\frac{t^2}{2n}$.

In the end,
$$ \lim\limits_{n\rightarrow +\infty}M\left(\frac{t}{\sqrt{n}}\right)^n=e^{\frac{t^2}{2}} $$
which is the generating function of a standard normal distribution; you can then conclude with the use of Lévy's continuity theorem (https://en.wikipedia.org/wiki/L%C3%A9vy%27s_continuity_theorem).
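To make this limit concrete, here is a small numerical sketch (my own illustration, not from the book). It uses a hypothetical choice of $W_i$ uniform on $[-\sqrt{3},\sqrt{3}]$, which has mean $0$ and variance $1$, so its MGF $M(t)=\sinh(\sqrt{3}\,t)/(\sqrt{3}\,t)$ satisfies the hypotheses; the values $[M(t/\sqrt n)]^n$ visibly approach $e^{t^2/2}$ as $n$ grows:

```python
import math

# Hypothetical example: W_i uniform on [-sqrt(3), sqrt(3)] (mean 0, variance 1).
# Its moment generating function is M(t) = sinh(sqrt(3) t) / (sqrt(3) t).
def M(t):
    x = math.sqrt(3) * t
    return math.sinh(x) / x

def mgf_of_scaled_sum(t, n):
    # MGF of (W_1 + ... + W_n) / sqrt(n) is [M(t / sqrt(n))]^n.
    return M(t / math.sqrt(n)) ** n

t = 1.0
for n in (10, 1000, 100000):
    print(n, mgf_of_scaled_sum(t, n))
print("limit e^{t^2/2}:", math.exp(t**2 / 2))
```

The printed values approach $e^{1/2}\approx 1.6487$, matching the claimed limit.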

Answer 2:

I agree that this proof is not phrased in the most clear of terms. I hope what I have to offer can help.

First off, the moment generating function of a generic random variable $X$ is given by $M(t) = E[e^{tX}]$, where $E$ denotes the expected value. Let $f(x) = e^x$. From Calc 2 we learned that we can write $M(t)$ exactly as $$M(t) = E\left[\sum_{n=0}^{\infty}\,\frac{f^{(n)}(0)}{n!}(tX-0)^n\right] = E\left[\sum_{n=0}^{\infty}\,\frac{1}{n!}(tX)^n\right] = E\left[\frac{1}{0!} + \frac{tX}{1!} + \frac{t^2X^2}{2!} + \cdots\right].$$
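You can check this series representation numerically. The example below is my own (not from the book): for a hypothetical $X$ uniform on $[-1,1]$, the odd moments vanish and the even moments are $E[X^n] = 1/(n+1)$, so the partial sums of $\sum_n t^n E[X^n]/n!$ should agree with the exact MGF $\sinh(t)/t$:

```python
import math

# Hypothetical example: X uniform on [-1, 1].
# E[X^n] = 0 for odd n and 1/(n+1) for even n; the exact MGF is sinh(t)/t.
def moment(n):
    return 0.0 if n % 2 else 1.0 / (n + 1)

def mgf_series(t, terms=20):
    # Partial sum of  sum_n t^n E[X^n] / n!  (expectation taken term by term).
    return sum(t**n * moment(n) / math.factorial(n) for n in range(terms))

t = 1.5
print(mgf_series(t))         # truncated series
print(math.sinh(t) / t)      # exact MGF
```

With 20 terms the truncated series and the exact MGF agree to many decimal places.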

This is just the expectation of a Taylor series centered at $0$. Perhaps less well known is that we can also write $M(t)$ exactly for each $t$ even if we truncate the series at the second power. Specifically, given any $t$, there is some number $r$ between $0$ and $t$ such that

$$M(t) = E\left[\frac{f^{(0)}(0)}{0!}(tX-0)^0 + \frac{f^{(1)}(0)}{1!}(tX-0)^1 + \frac{f^{(2)}(r)}{2!}(tX-0)^2\right] = E\left[1 + \frac{tX}{1!} + e^{r}\frac{(tX)^2}{2!}\right] .$$

The key point, which is not made clear in the proof, is that this is not a quadratic function, and it is not an approximation, even though it looks like it on the surface. In reality, $r$ depends on $t$. It helps me to mentally replace $r$ by $r(t)$. If you realize this, then writing $M(t)$ this way is not so hard to believe. As stated in the proof supplied, we can "truncate" at the quadratic term due to Taylor's Theorem with Remainder, specifically, due to Lagrange's version of it. For a statement of the theorem, see, for example, https://www.youtube.com/watch?v=DP_pGQaNGdw or https://people.clas.ufl.edu/kees/files/TaylorRemainderProof.pdf
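To make Lagrange's remainder concrete, here is a small check of my own (not from the book): for $f(x) = e^x$ and a fixed $t > 0$, the theorem says $e^t = 1 + t + \frac{t^2}{2}e^{r}$ for some $r$ strictly between $0$ and $t$. We can solve for $r$ explicitly and confirm it always lands in $(0, t)$, and that it genuinely moves with $t$, i.e. $r = r(t)$:

```python
import math

def lagrange_r(t):
    # Solve  e^t = 1 + t + (t^2 / 2) * e^r  for r (t > 0).
    return math.log(2 * (math.exp(t) - 1 - t) / t**2)

for t in (0.1, 0.5, 1.0, 2.0):
    r = lagrange_r(t)
    print(t, r, 0 < r < t)   # r lies strictly between 0 and t
```

Each line prints `True`, and the printed $r$ values differ for different $t$, illustrating why the truncated expression is still exact rather than an approximation.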

For the next part of your question, recall the definition of a (one-variable) derivative at a point $a$. It is: $$g'(a) = \lim_{\Delta x \to 0} \frac{g(a + \Delta x) - g(a)}{\Delta x}.$$ The algebraic manipulations performed in the proof turn the limit into a form that matches the definition of a derivative. No rules like the quotient rule are being used here. Explicitly, multiply the numerator and denominator by $\frac{t^2}{2n}M^{(2)}(s)$, then use the fact that $\ln(1) = 0$ to simply subtract it off the numerator without changing the limit.

To see how this applies to the proof, let $g(x) = \mathrm{ln}(x)$. Then $$g'(1) = \lim_{\Delta x \to 0} \frac{\mathrm{ln}(1 + \Delta x) - \mathrm{ln}(1)}{\Delta x}.$$

If we let $\Delta x = \frac{t^2}{2n}M^{(2)}(s)$, we see that the condition $\Delta x \to 0$ is equivalent to the condition $n \to \infty$. Thus, we also have

$$g'(1) = \lim_{n \to \infty} \frac{\mathrm{ln}(1 + \frac{t^2}{2n}M^{(2)}(s)) - \mathrm{ln}(1)}{\frac{t^2}{2n}M^{(2)}(s)}.$$
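As a quick numerical sanity check (mine, not the book's): the difference quotient above tends to $g'(1) = 1$, so for large $n$ the bracketed ratio is essentially $1$ and only the prefactor $\frac{t^2}{2}M^{(2)}(s)$ survives in the exponent:

```python
import math

def diff_quotient(dx):
    # [ln(1 + dx) - ln(1)] / dx, which tends to ln'(1) = 1 as dx -> 0.
    return (math.log(1 + dx) - math.log(1)) / dx

for n in (10, 1000, 100000):
    dx = 1.0 / (2 * n)   # plays the role of (t^2 / 2n) M''(s) with t = 1, M''(s) near 1
    print(n, diff_quotient(dx))
```

The printed values approach $1$ as $n$ grows, which is exactly the role this factor plays in the limit.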

Also, the condition $|s| < \frac{|t|}{\sqrt{n}}$ comes from

  1. Taylor's Theorem (here, $s$ plays the role of $r$ in the above comment, and really $s = s(t)$) and
  2. the fact that we have evaluated $M$ at $\frac{t}{\sqrt{n}}$ rather than at $t$.

I haven't included every last detail, but do you think you can take it from here?