Intuition of Lindeberg Condition


I'm reading Shiryaev's *Probability*, where he discusses the Central Limit Theorem for the normalized and centered sums $S_n$ of independent random variables $X_1, \ldots, X_n$, $n \geq 1$, under the classical Lindeberg condition. The condition states that for independent $X_1, X_2, \ldots$ and any $\epsilon > 0$, as $n \to \infty$:

$$ (L) \ \ \ \ \frac{1}{D^2_n} \sum_{k=1}^{n} \int_{\{|x-m_k|\geq \epsilon D_n\}} (x-m_k)^2 \, dF_k(x) \to 0, $$ where $E X_k = m_k$, $\operatorname{Var} X_k = \sigma^2_k < \infty$, $S_n = \sum_{j=1}^{n} X_j$, and $D_n^2 = \sum_{k=1}^{n} \sigma^2_k$.

I read the proof of the CLT for triangular arrays, which requires this condition.

However, I don't seem to truly grasp the intuition behind Lindeberg's condition. I understand the proof and have no doubt that it holds, but I'm not sure what condition (L) actually means. Could anyone explain the intuition behind Lindeberg's condition, perhaps with a simple example or a more theoretical argument? My goal is simply to grasp the intuition behind it.

Best answer:

Essentially, what the Lindeberg CLT says is that given a triangular array of random variables that are normalized so that they satisfy the variance condition (the one that looks like $\sum_{m=1}^n \mathbb{E}[X_{n,m}^2] \rightarrow \sigma^2$) and that do not deviate too much from their means, we can apply the central limit theorem.

Suppose condition $(L)$ does not hold. This means that a non-negligible fraction of the total variance comes from 'exceptional' random variables with a high tendency to deviate significantly from their mean. To be concrete, suppose the random variables are centered, $\mathbb{E}[X_{n,m}] = 0$ for $m = 1, \ldots, n$; then if we take their arithmetic mean, by the law of large numbers it will converge to $0$. However, this does not mean that $X_{n,m} = 0$ a.s.

Consider the following example: let $P(X_{n,m} = 1) = P(X_{n,m} = -1) = 1/2$. We know that $\mathbb{E}[X_{n,m}] = 0$, but $X_{n,m} \neq 0$. This is where condition $(L)$ comes in: it prevents situations like this from mattering in the limit, by requiring not only that the random variables have finite variance, but also that their deviations from the mean become negligible relative to $D_n$.
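A sketch of my own of a case where $(L)$ genuinely fails may help: take a triangular array in which a single variable carries all the variance, say $X_{n,1} = \pm 1$ with probability $1/2$ and $X_{n,m} = 0$ for $m \geq 2$. Then $D_n^2 = 1$, so for any $\epsilon \leq 1$ the Lindeberg sum equals $E[X_{n,1}^2; |X_{n,1}| \geq \epsilon] = 1$ for every $n$, and indeed the normalized sum has a two-point law rather than a Gaussian one.

```python
import random

def sample_normalized_sum(trials=20000):
    """Simulate S_n / D_n for the array X_{n,1} = +/-1 (p = 1/2),
    X_{n,m} = 0 for m >= 2.  Since D_n = 1, the normalized sum is
    just X_{n,1}: condition (L) fails and there is no Gaussian limit."""
    return [random.choice((-1.0, 1.0)) for _ in range(trials)]

samples = sample_normalized_sum()
# The normalized sum only ever takes the two values -1 and +1:
print(sorted(set(samples)))
```

The contrast with the standard i.i.d. Rademacher case is exactly the role of $(L)$: there, each variable's contribution to $D_n^2$ is negligible, while here one variable dominates the whole sum.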

Also, I feel that Durrett's *Probability: Theory and Examples* gives quite a good treatment of this topic. Perhaps you can consult that too :)