Please help me with problem 3.26 of 'Probability Theory' by Varadhan:
For each $n\in \mathbb{N}$, $\ X_{n,j}:(\Omega,P) \to \mathbb{R}$ with $1\leq j\leq k_n$ are $k_n$ independent random variables with $$P[X_{n,j}=1]=p_{n,j} \text{ and } P[X_{n,j}=0]=1-p_{n,j}.$$
Let $S_n:=\sum_{j} X_{n,j}$, and let $\lambda_n:=\sum_{j} p_{n,j}$ be the mean of $S_n$.
Question: If $\lambda_n \to \infty$, show that the distribution of $\frac{S_n-\lambda_n}{\sqrt{\lambda_n}}$ converges to the standard normal distribution.
Though not explicitly mentioned, I think the assumption that the $X_{n,j}$ be uniformly infinitesimal is required. That is, we assume $$\lim\limits_{n\to \infty} \sup\limits_{1\leq j\leq k_n} p_{n,j} = 0. $$ Some such assumption really is needed: if $k_n = n$ and $p_{n,j}=1/2$ for all $j$, then $\operatorname{Var}(S_n)/\lambda_n = 1/2$, so by the classical CLT $\frac{S_n-\lambda_n}{\sqrt{\lambda_n}}$ converges to $N(0,1/2)$ rather than to the standard normal.
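A quick numerical sanity check of both regimes (just a simulation sketch I wrote; the function name and the parameter choices are mine, and for simplicity each row has constant $p_{n,j}$, so that $S_n$ is binomial):

```python
import numpy as np

rng = np.random.default_rng(0)

def standardized_sum(k, p, reps=200_000):
    """Sample (S_n - lambda_n)/sqrt(lambda_n) for a row in which all k
    success probabilities equal p, so S_n ~ Binomial(k, p), lambda_n = k*p."""
    lam = k * p
    S = rng.binomial(k, p, size=reps)
    return (S - lam) / np.sqrt(lam)

# Infinitesimal row: p = 10^-3 with k = 10^6, so sup_j p_{n,j} is small
# while lambda_n = 1000 is large; the sample variance should be near 1.
print(standardized_sum(1_000_000, 1e-3).var())   # ~ 0.999

# Non-infinitesimal row: p = 1/2 fixed; Var(S_n)/lambda_n = 1 - p = 1/2,
# so the limit is N(0, 1/2), not the standard normal.
print(standardized_sum(1_000_000, 0.5).var())    # ~ 0.5
```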
Attempt: This looks like a central limit theorem without the variables being identically distributed, so I try to compute the characteristic function:
\begin{equation} \begin{split} \widehat{\frac{S_n-\lambda_n}{\sqrt{\lambda_n}}}(t) &= \int\exp \left[it\frac{S_n(\omega)-\lambda_n}{\sqrt{\lambda_n}}\right]dP(\omega) \\ &= \int\exp\left[ \frac{itS_n(\omega)}{\sqrt{\lambda_n}}\right] \exp\left[\frac{-it\lambda_n}{\sqrt{\lambda_n}}\right]dP(\omega)\\ &= \exp\left[\frac{-it\lambda_n}{\sqrt{\lambda_n}}\right]\int\exp\left[\frac{it}{\sqrt{\lambda_n}}\sum x_j\right]d((X_{n,1},\dots,X_{n,k_n})_*P)(x_1,\dots,x_{k_n})\\ &= \exp\left[\frac{-it\lambda_n}{\sqrt{\lambda_n}}\right]\int\dots\int\exp\left[\frac{it}{\sqrt{\lambda_n}}\sum x_j\right]d((X_{n,1})_*P)(x_1)\dots d((X_{n,k_n})_*P)(x_{k_n})\\ &= \exp\left[\frac{-it\lambda_n}{\sqrt{\lambda_n}}\right] \prod \int \exp\left[\frac{itx}{\sqrt{\lambda_n}}\right]d((X_{n,j})_*P)(x)\\ &= e^{\frac{-it}{\sqrt{\lambda_n}}\sum p_{n,j}} \prod \left[(1-p_{n,j}) + p_{n,j}e^{\frac{it}{\sqrt{\lambda_n}}}\right]\\ &= \prod \left[(1-p_{n,j})e^{\frac{-itp_{n,j}}{\sqrt{\lambda_n}}} + p_{n,j}e^{\frac{it(1-p_{n,j})}{\sqrt{\lambda_n}}}\right]. \end{split} \end{equation}
Not sure what to do now. Maybe take logarithms, but that doesn't look too promising at first sight. Or should I be looking at the Lévy–Khintchine representation?
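It turns out that taking logarithms does work, given the infinitesimality assumption; here is a sketch for completeness. Starting from the penultimate line above, write $z_{n,j} := p_{n,j}\left(e^{it/\sqrt{\lambda_n}}-1\right)$, so the characteristic function equals $e^{-it\sqrt{\lambda_n}}\prod\left(1+z_{n,j}\right)$. For fixed $t$ we have $|z_{n,j}| \leq p_{n,j}|t|/\sqrt{\lambda_n} \to 0$ uniformly in $j$, so for large $n$ we may take principal logarithms, with $\left|\log(1+z_{n,j})-z_{n,j}\right| \leq |z_{n,j}|^2$. Then
\begin{equation}
\sum z_{n,j} = \lambda_n\left(e^{\frac{it}{\sqrt{\lambda_n}}}-1\right) = \lambda_n\left(\frac{it}{\sqrt{\lambda_n}} - \frac{t^2}{2\lambda_n} + O\left(\lambda_n^{-3/2}\right)\right) = it\sqrt{\lambda_n} - \frac{t^2}{2} + O\left(\lambda_n^{-1/2}\right),
\end{equation}
while
\begin{equation}
\sum |z_{n,j}|^2 \leq \frac{t^2}{\lambda_n}\sum p_{n,j}^2 \leq t^2 \sup_{1\leq j\leq k_n} p_{n,j} \longrightarrow 0
\end{equation}
by the infinitesimality assumption (the only place it is used). Hence
\begin{equation}
\log \widehat{\frac{S_n-\lambda_n}{\sqrt{\lambda_n}}}(t) = -it\sqrt{\lambda_n} + \sum \log\left(1+z_{n,j}\right) = -\frac{t^2}{2} + o(1),
\end{equation}
and Lévy's continuity theorem finishes the proof.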
Question: Why are people interested in such a sequence of $S_n$? I understand that the i.i.d. case with $p_{n,j}=1/2$ describes someone tossing a bunch of fair coins.
Edit: I see that this question is very similar, although I'm not sure it's exactly the same, since it involves a somewhat different assumption under which some of the variables follow a Poisson distribution.
OK, maybe I was being lazy and should have used the Lévy–Khintchine representation together with the accompanying laws theorem, as indicated in the book:
For each $n$, we have $k_n$ independent variables $Y_{n,j}:=\frac{X_{n,j}-p_{n,j}}{\sqrt{\lambda_n}}$ as above and we want to show that the distribution $\mu_n$ of $\sum Y_{n,j}$ converges to the standard normal distribution.
Let $\alpha_{n,j} := (Y_{n,j})_*P$ be the distribution of $Y_{n,j}$. Note that the $Y_{n,j}$ are uniformly infinitesimal and have truncated means $\int_{|x|\leq 1}x \, d\alpha_{n,j}(x)=0$ (once $\lambda_n \geq 1$ we have $|Y_{n,j}| \leq 1$, so the truncated mean is the full mean). The accompanying laws theorem then says that $\mu_n$ converges to the standard normal if and only if the accompanying laws $\nu_n$ (defined below; I write $\nu_n$ for them, since $\lambda_n$ is already taken by the means) converge to the same limit. We have $$\nu_n := \beta_{n,1} * \beta_{n,2} * \cdots * \beta_{n,k_n},$$ where $\beta_{n,j}$ is the distribution defined by the property $\widehat{\beta_{n,j}}(t)= \exp\left(\widehat{\alpha_{n,j}}(t)-1\right)$.
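Incidentally, unpacking this definition shows that each $\beta_{n,j}$ is a compound Poisson law, which is presumably the Poisson connection from the question linked above:
\begin{equation}
\widehat{\beta_{n,j}}(t) = e^{-1}\sum_{m\geq 0}\frac{\widehat{\alpha_{n,j}}(t)^m}{m!} = e^{-1}\sum_{m\geq 0}\frac{\widehat{\alpha_{n,j}^{*m}}(t)}{m!}, \quad\text{i.e.}\quad \beta_{n,j} = e^{-1}\sum_{m\geq 0}\frac{\alpha_{n,j}^{*m}}{m!},
\end{equation}
the law of a sum of $N$ i.i.d. draws from $\alpha_{n,j}$, where $N \sim \mathrm{Poisson}(1)$ is independent of the draws.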
If we compute the characteristic function of $\nu_n$, we see \begin{equation} \begin{split} \widehat{\nu_n}(t)= \prod\widehat{\beta_{n,j}}(t) &= \prod \exp\left(\widehat{\alpha_{n,j}}-1\right) = \exp\left(\sum\int(e^{itx}-1)d\alpha_{n,j}(x)\right)\\ &=\exp\left(\int\left(e^{itx}-1-it\theta(x)\right)dM_n(x) + ita_n\right) \end{split} \end{equation} where $M_n := \sum_j \alpha_{n,j}$ is the $\textit{admissible Lévy measure}$ and $\theta(x)$ is the bounded continuous function equal to $x$ for $|x|\leq 1$ and to $\operatorname{sgn}(x)$ otherwise. (The only requirement on $\theta$ is that it be bounded and continuous with $|\theta(x)-x| \leq C|x|^3$ near $0$.) The term $a_n$ is, of course, $\int \theta \, dM_n$.
The expression for $\widehat{\nu_n}$ above is written as e$(M_n,0,a_n)$, the $\textit{Lévy–Khintchine representation}$ for the infinitely divisible distribution $\nu_n$. (The $0$ indicates that the Gaussian term $\frac{-t^2\sigma^2}{2}$ within the exponential is absent.) For e$(M_n,0,a_n)$ to converge to the standard normal distribution e$(0,1,0)$, it is necessary and sufficient that

$(1)$ $\int f \, dM_n \to 0$ for every bounded continuous $f$ vanishing in a neighborhood of $0$;

$(2)$ $\int_{|x|\leq l} x^2 \, dM_n(x) \to 1$ for every $l > 0$;

$(3)$ $a_n \to 0$.
Proof of (1): \begin{equation} \int f \, dM_n = \sum \int f(x) \, d\alpha_{n,j}(x) = \sum \int f\left(\frac{X_{n,j}(\omega)-p_{n,j}}{\sqrt{\lambda_n}}\right)dP(\omega). \end{equation} Since $f$ vanishes in a neighborhood of $0$, since $|X_{n,j}-p_{n,j}| \leq 1$, and since $\lambda_n \to \infty$, each $Y_{n,j}$ satisfies $|Y_{n,j}| \leq \frac{1}{\sqrt{\lambda_n}}$, which eventually lies inside that neighborhood; hence $\int f \, dM_n = 0$ for all large $n$, and $(1)$ holds.
Proof of (3): \begin{equation} \begin{split} a_n = \int\theta \, dM_n &= \sum \int \theta(x) \, d\alpha_{n,j}(x)\\ &= \sum \left(\int_{|x|\leq 1} x \, d\alpha_{n,j}(x) + \int_{|x| > 1} \operatorname{sgn}(x) \, d\alpha_{n,j}(x)\right). \end{split} \end{equation} As in the proof of $(1)$, once $\lambda_n > 1$ we have $|Y_{n,j}| \leq \frac{1}{\sqrt{\lambda_n}} < 1$, so $\alpha_{n,j}$ puts no mass on $\{|x|>1\}$ and the second summands vanish. We are left with computing (for such $n$) \begin{equation} \begin{split} \sum \int_{|x|\leq 1} x \, d\alpha_{n,j}(x) &= \sum \int x \, d\alpha_{n,j}(x) \\ &= \sum \int \frac{X_{n,j}(\omega)-p_{n,j}}{\sqrt{\lambda_n}} \, dP(\omega) =0, \end{split} \end{equation} since the $Y_{n,j}$ have zero means. Hence $a_n = 0$ for $n$ large, and $(3)$ holds.
Proof of (2): Fix $l > 0$. Since $|Y_{n,j}| \leq \frac{1}{\sqrt{\lambda_n}} \leq l$ for $n$ large, the indicator ${\bf 1}_{\{|x|\leq l\}}$ is identically $1$ on the support of each $\alpha_{n,j}$, so for such $n$ \begin{equation}\begin{split} \int_{|x|\leq l} x^2 \, dM_n(x) &= \sum \int \left(\frac{X_{n,j}(\omega) - p_{n,j}}{\sqrt{\lambda_n}}\right)^2dP(\omega)\\ &= \frac{1}{\lambda_n}\sum \int \left(X_{n,j}^2(\omega) + p_{n,j}^2 - 2p_{n,j}X_{n,j}(\omega)\right)dP(\omega)\\ &= \frac{1}{\lambda_n}\sum p_{n,j}(1-p_{n,j}), \end{split}\end{equation} which converges to $1$ by the infinitesimality assumption.
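To spell out the last step:
\begin{equation}
\frac{1}{\lambda_n}\sum p_{n,j}(1-p_{n,j}) = 1 - \frac{\sum p_{n,j}^2}{\lambda_n}, \qquad\text{and}\qquad 0 \leq \frac{\sum p_{n,j}^2}{\lambda_n} \leq \sup_{1\leq j\leq k_n} p_{n,j} \longrightarrow 0.
\end{equation}
So conditions $(1)$–$(3)$ all hold, and $\mu_n$ converges to the standard normal distribution, as desired.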