Central Limit Theorem for asymptotically i.i.d. random variables


I have an ergodic, stationary sample of independent column random vectors $\{\mathbf x_1, \ldots,\mathbf x_n\} \equiv \mathbf X_n$, $\mathbf x_i \in \mathbb R^k$, with finite moments and cross-moments. I also have a single-valued function $g(\mathbf x_i,\mathbf X_n)$ of the sample, which can be thought of as a function of $\mathbf x_i$ and of some estimator of an unknown parameter. Using a bar to denote sample means, I consider the quantity

$$\mathbf y_i\equiv (\mathbf x_i- \bar{\mathbf x})\cdot \left[g(\mathbf x_i,\mathbf X_n)-\bar g(\mathbf x_i,\mathbf X_n)\right],\;\; \mathbb E[\mathbf y_i] \neq 0$$

Because $g(\cdot)$ depends on the whole sample $\mathbf X_n$, the $\mathbf y_i$'s are identically distributed but dependent. In fact, their dependence (say, their covariance) is the same for any pair $(\mathbf y_i, \mathbf y_j)$, $i \neq j$. Moreover, as the sample size changes, the common marginal distribution of the $\mathbf y_i$'s changes as well.

The following hold, as the sample size $n$ tends to infinity:

$$(\mathbf x_i- \bar{\mathbf x}) \xrightarrow{p} (\mathbf x_i- \mathbb E [\mathbf x_i]),\;\;\; \left[g(\mathbf x_i,\mathbf X_n)-\bar g(\mathbf x_i,\mathbf X_n)\right] \xrightarrow{p} \left[g(\mathbf x_i)-\mathbb E[g(\mathbf x_i)]\right]$$

which reflects the case where the estimator converges to a constant, so that asymptotically the dependence of $g(\cdot)$ on the whole sample vanishes:

$$\mathbf y_i \xrightarrow{p}(\mathbf x_i- \mathbb E [\mathbf x_i])\cdot \left[g(\mathbf x_i)-\mathbb E[g(\mathbf x_i)]\right]\equiv \mathbf z_i,\;\;\; \mathbb E(\mathbf z_i)= \mathbf 0$$
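To make this element-wise convergence concrete, here is a toy scalar example (my own hypothetical choice, not necessarily the intended $g$): let $k=1$, $x_i \sim N(0,1)$ i.i.d., and $g(x_i, \mathbf X_n) = (x_i - \hat\theta_n)^2$ with $\hat\theta_n = \bar x$ estimating $\theta = \mathbb E[x_i] = 0$. Then $g(x_i) = x_i^2$, $z_i = x_i(x_i^2 - 1)$, and $\mathbb E[z_i] = \mathbb E[x_i^3] - \mathbb E[x_i] = 0$. A quick simulation shows the element-wise gap $|y_i - z_i|$ shrinking as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def y_and_z(n, rng):
    """Hypothetical example: x_i ~ N(0,1), g(x_i, X_n) = (x_i - xbar)^2.
    Returns the finite-sample y_i and their probability limits z_i."""
    x = rng.standard_normal(n)
    g = (x - x.mean())**2            # g depends on the sample through xbar
    y = (x - x.mean()) * (g - g.mean())
    z = x * (x**2 - 1.0)             # limit: g(x_i) = x_i^2, E[g(x_i)] = 1
    return y, z

for n in (100, 10_000):
    y, z = y_and_z(n, rng)
    print(n, np.abs(y - z).mean())   # element-wise gap shrinks with n
```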

So while the dependence between any two $\mathbf y_i$'s is unaffected by their distance in the sequence (the index does not reflect any natural ordering such as time, so any permutation is valid), it diminishes for all pairs as the sample size increases, and vanishes asymptotically.

Therefore the limiting $\mathbf z_i$'s are zero-mean, independent and identically distributed, and we have that

$$ n^{-1/2}\sum_{i=1}^n\mathbf z_i \xrightarrow{d} \mathbf N(\mathbf 0, \mathbf V_z)$$
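For instance, in a hypothetical scalar case with $x_i \sim N(0,1)$ i.i.d. and $z_i = x_i(x_i^2-1)$, the normal moments give $V_z = \mathbb E[z_i^2] = \mathbb E[x^6] - 2\,\mathbb E[x^4] + \mathbb E[x^2] = 15 - 6 + 1 = 10$, and a Monte Carlo check of the normalized sums is straightforward:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 1_000, 4_000

# Hypothetical scalar case: z_i = x_i^3 - x_i with x_i ~ N(0,1),
# so E[z_i] = 0 and V_z = E[x^6] - 2 E[x^4] + E[x^2] = 15 - 6 + 1 = 10.
x = rng.standard_normal((reps, n))
sums = (x**3 - x).sum(axis=1) / np.sqrt(n)   # one normalized sum per replication

print(sums.mean())   # close to 0
print(sums.var())    # close to V_z = 10
```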

I need to be able to state that

$$n^{-1/2}\sum_{i=1}^n\mathbf y_i \xrightarrow{d} \mathbf N(\mathbf 0, \mathbf V_z)$$

but with my limited knowledge, I suspect that going from the normalized sum of the limiting i.i.d. rv's "back" to the normalized sum of the dependent convergent rv's in order to obtain the CLT of interest requires additional conditions to be satisfied (presumably something guaranteeing $n^{-1/2}\sum_{i=1}^n(\mathbf y_i - \mathbf z_i) \xrightarrow{p} \mathbf 0$, since the element-wise convergence $\mathbf y_i \xrightarrow{p} \mathbf z_i$ by itself need not control the normalized sum), or at least demands an actual proof (even if trivial). Looking at books, the web, math.SE, etc., I could not find anything clear on the matter, so I am posting this question looking for guidance.
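For what it is worth, a toy numerical experiment (with a hypothetical $g$ of my own choosing, which may well not match the intended application) suggests the suspicion is justified. Take $k=1$, $x_i \sim N(0,1)$ and $g(x_i,\mathbf X_n) = (x_i-\bar x)^2$, so that $g(x_i) = x_i^2$, $z_i = x_i^3 - x_i$ and $V_z = 10$. Here $\sum_i y_i = \sum_i (x_i-\bar x)^3$, and expanding the cube gives $n^{-1/2}\sum_i y_i = n^{-1/2}\sum_i (x_i^3 - 3x_i) + o_p(1)$, whose limiting variance is $\mathbb E[(x^3-3x)^2] = 15 - 18 + 9 = 6 \neq 10 = V_z$. So in this example $y_i \xrightarrow{p} z_i$ element-wise, yet the two normalized sums have different Gaussian limits:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 1_000, 3_000

# Hypothetical g: g(x_i, X_n) = (x_i - xbar)^2 with x_i ~ N(0,1).
# Element-wise, y_i -> z_i = x_i^3 - x_i (variance 10), but the normalized
# sum of the y_i retains an extra estimation-error term from xbar.
x = rng.standard_normal((reps, n))
xc = x - x.mean(axis=1, keepdims=True)         # x_i - xbar, per replication
g = xc**2
y = xc * (g - g.mean(axis=1, keepdims=True))   # y_i as defined in the question
z = x**3 - x                                   # limiting z_i

Sy = y.sum(axis=1) / np.sqrt(n)
Sz = z.sum(axis=1) / np.sqrt(n)

print(Sz.var())   # close to V_z = 10
print(Sy.var())   # close to 6, not 10
```

The gap between the two variances does not shrink with $n$, which is why I believe a condition on the sum of the differences, and not just element-wise convergence, is needed.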