I'm wondering whether one can obtain results similar to the classical local limit theorem if conditions such as independence and identical distribution of the random variables involved hold only asymptotically. In a sense this is similar to what Erdős and Kac did here, in connection with the central limit theorem, where the dependence among the Bernoulli variables $I_{n_p}$, for $p$ prime, tends to $0$ as $n \to \infty$.
Here's a problem I'm interested in (or rather in its applications). I wonder if you could point me in the right direction.
For $n \geq 1$ and each $i$ with $1 \leq i \leq n$, let $X_{n,i} : \Omega_n \to \{0, 1\}$ be Bernoulli random variables on a finite sample space $\Omega_n$ (whose size grows with $n$). Assume that $\mathbb{E}[X_{n,i}] = p + O(\epsilon(n))$ for some fixed $p$ with $0 < p < 1$ and some function $\epsilon(n) \to 0$ as $n \to \infty$. Further assume that $\mathbb{E}[X_{n,i} X_{n,j}] = \mathbb{E}[X_{n,i}] \mathbb{E}[X_{n,j}] + O(\epsilon(n))$ for $i \neq j$, i.e., the variables are "asymptotically independent". Let $S_n = \sum_{i=1}^n X_{n,i}$. I'm interested in a local-limit-theorem-type (Gaussian) asymptotic approximation for the right-tail probability $\mathbb{P}(S_n \geq n-k)$, with $k > 0$ fixed and $n$ large. Since $n \to \infty$ anyway, I feel the lack of independence of the variables at small scales should be irrelevant...
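To make the intuition concrete, here is a small numerical sketch (not part of the question's setup, just an illustrative toy model I chose): dependence is induced by a shared random success probability $q_n = p + \epsilon(n)\,U$ with $U$ uniform on $[-1,1]$, which gives pairwise covariance $\operatorname{Var}(q_n) = O(\epsilon(n)^2) \to 0$, so the $X_{n,i}$ are asymptotically independent in the sense above. The Monte Carlo estimate of $\mathbb{P}(S_n \geq n-k)$ can then be compared against the exact i.i.d. binomial tail.

```python
import math
import random

def exact_binomial_tail(n, p, k):
    """P(S_n >= n - k) for S_n ~ Binomial(n, p), computed exactly."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(n - k, n + 1))

def mc_tail(n, p, k, eps, trials, rng):
    """Monte Carlo estimate of P(S_n >= n - k) under the exchangeable
    mixture model: a shared q = p + eps*U introduces O(eps^2) pairwise
    covariance, vanishing as eps -> 0 (a hypothetical toy construction)."""
    hits = 0
    for _ in range(trials):
        q = p + eps * rng.uniform(-1.0, 1.0)  # shared perturbation
        s = sum(1 for _ in range(n) if rng.random() < q)
        if s >= n - k:
            hits += 1
    return hits / trials

rng = random.Random(0)
n, p, k = 30, 0.9, 5
est = mc_tail(n, p, k, eps=1.0 / n, trials=100_000, rng=rng)
ref = exact_binomial_tail(n, p, k)
print(est, ref)  # the two should agree up to Monte Carlo error
```

Of course, a simulation for one decaying-correlation construction proves nothing in general; it only illustrates the heuristic that $O(\epsilon(n))$ correlations leave the binomial tail approximately intact for a fixed $k$.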