This question is Exercise 3.1.10, part (b), from Amir Dembo's lecture notes. I will briefly describe the setting and state the question:
Let $R_{n}:=B_{1}+\cdots+B_{n}$ for mutually independent Bernoulli random variables $B_{k}$ such that $\mathbb{P}(B_{k}=1)=1-\mathbb{P}(B_{k}=0)=k^{-1}$. Show that Lindeberg's Central Limit Theorem can be applied to the triangular array $X_{n,k}:=\dfrac{1}{\sqrt{\log n}}(B_{k}-k^{-1})$.
Part (a) gives two facts that will be used below: $$\dfrac{Var(R_{n})}{\log n}\longrightarrow 1\ \text{as}\ n\longrightarrow\infty,$$ and $$\mathbb{E}B_{k}=k^{-1}.$$
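As a sanity check on this limit (my own illustration, not part of the exercise), the variance can be evaluated exactly, since $Var(R_{n})=\sum_{k=1}^{n}k^{-1}(1-k^{-1})$ by independence. A short Python sketch comparing it with $\log n$:

```python
import math

def var_Rn(n):
    # Exact variance: Var(R_n) = sum_{k=1}^n Var(B_k) = sum_{k=1}^n k^{-1}(1 - k^{-1})
    return sum(1.0 / k - 1.0 / k**2 for k in range(1, n + 1))

for n in (10**2, 10**4, 10**6):
    # The ratio creeps up toward 1 (slowly, since the error is O(1/log n))
    print(n, var_Rn(n) / math.log(n))
```

The convergence is slow because $Var(R_{n})=H_{n}-\sum_{k\leq n}k^{-2}$ differs from $\log n$ by an $O(1)$ term, so the ratio approaches $1$ only at rate $O(1/\log n)$.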
To apply Lindeberg's CLT, we need to verify that
$(1)$ $\mathbb{E}X_{n,k}=0$,
$(2)$ $\sigma_{n}:=\sum_{k=1}^{n}\mathbb{E}X_{n,k}^{2}\longrightarrow 1\ \text{as}\ n\longrightarrow\infty.$
$(3)$ Lindeberg's condition: for each $\epsilon>0$, we have $$g_{n}(\epsilon):=\sum_{k=1}^{n}\mathbb{E}[X_{n,k}^{2};|X_{n,k}|\geq\epsilon]\longrightarrow 0\ \text{as}\ n\longrightarrow\infty.$$
I have shown the first two, as follows:
By the results from part (a), we have $\mathbb{E}B_{k}=k^{-1}$, and thus $\mathbb{E}(X_{n,k})=0$. On the other hand, since the $B_{k}$ are independent, and again by part (a), we have \begin{align*} \sigma_{n}=\sum_{k=1}^{n}\mathbb{E}X_{n,k}^{2}&=\sum_{k=1}^{n}\mathbb{E}\Big[\dfrac{1}{\sqrt{\log n}}(B_{k}-k^{-1})\Big]^{2}\\ &=\dfrac{1}{\log n}\sum_{k=1}^{n}\mathbb{E}(B_{k}-k^{-1})^{2}\\ &=\dfrac{1}{\log n}\sum_{k=1}^{n}Var(B_{k})\\ &=\dfrac{Var(R_{n})}{\log n}\longrightarrow 1. \end{align*}
However, I do not know how to verify Lindeberg's condition. Below is my attempt:
\begin{align*} g_{n}(\epsilon):&=\sum_{k=1}^{n}\mathbb{E}[X_{n,k}^{2};|X_{n,k}|\geq\epsilon]\\ &=\dfrac{1}{\log n}\sum_{k=1}^{n}\mathbb{E}[(B_{k}-k^{-1})^{2};|B_{k}-k^{-1}|\geq\epsilon\sqrt{\log n}], \end{align*} but then I got stuck.
Another attempt is to use the bound $|X_{n,k}|\leq\dfrac{1}{\sqrt{\log n}}$, which follows from $|B_{k}-k^{-1}|\leq 1$, but I do not see how to use this bound inside the expectation and the resulting sum.
Is there anything else I can try?
Thank you!
Okay, I think I figured this out.
To prove $g_{n}(\epsilon)\longrightarrow 0$, we could try to compute it explicitly, but it is easier to go back to the definition of convergence: for an arbitrary fixed $\delta>0$, we need a finite $N$ such that $|g_{n}(\epsilon)-0|\leq\delta$ for all $n\geq N$.
Thus, we just need to find an $N$ beyond which $g_{n}(\epsilon)$ stays arbitrarily close to $0$.
Note that $|B_{k}-k^{-1}|\leq 1$ for all $k=1,\cdots,n$, and thus $|X_{n,k}|\leq\dfrac{1}{\sqrt{\log n}}$. Hence the event $$\epsilon\leq|X_{n,k}|\leq\dfrac{1}{\sqrt{\log n}}$$ can only occur when $\epsilon\leq\dfrac{1}{\sqrt{\log n}}$, that is, when $n\leq\exp(\epsilon^{-2})$.
Therefore, for every $n>\exp(\epsilon^{-2})$, each indicator $\{|X_{n,k}|\geq\epsilon\}$ vanishes, so $g_{n}(\epsilon)=0$.
In fact, this is stronger than needed: for $n>N:=\exp(\epsilon^{-2})$, $g_{n}(\epsilon)$ is not merely close to $0$ but exactly equal to $0$.
Thus, Lindeberg's Condition is satisfied.
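This cutoff can also be checked numerically. Since each $B_{k}$ takes only the values $0$ and $1$, the truncated expectation $g_{n}(\epsilon)$ can be computed exactly (no simulation needed); the Python sketch below, my own illustration with a hypothetical helper name `g_n`, shows it is positive for moderate $n$ but identically $0$ once $n>\exp(\epsilon^{-2})$:

```python
import math

def g_n(eps, n):
    # Exact g_n(eps): X_{n,k} = (B_k - 1/k)/sqrt(log n) takes only two values,
    # (1 - 1/k)/s with probability 1/k, and -(1/k)/s with probability 1 - 1/k.
    s = math.sqrt(math.log(n))
    total = 0.0
    for k in range(1, n + 1):
        p = 1.0 / k
        if (1 - p) / s >= eps:          # value when B_k = 1, probability p
            total += ((1 - p) / s) ** 2 * p
        if p / s >= eps:                # |value| when B_k = 0, probability 1 - p
            total += (p / s) ** 2 * (1 - p)
    return total

eps = 0.3                               # exp(eps^{-2}) is roughly 6.7e4
print(g_n(eps, 100))                    # n <= exp(eps^{-2}): positive
print(g_n(eps, 10**5))                  # n > exp(eps^{-2}): exactly 0
```

For $n>\exp(\epsilon^{-2})$ neither indicator ever fires, since $|X_{n,k}|\leq 1/\sqrt{\log n}<\epsilon$, so the returned sum is exactly $0$, matching the argument above.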
I will leave the post open for a day, and then I will answer my own question.