Uniform bound for law of large numbers


Let $(X_i)_{i\geqslant 1}$ be a sequence of i.i.d. real-valued random variables with finite variance. Given a bounded measurable function $f:\mathbb R^2\to \mathbb R$, is it true that, almost surely and uniformly in $k\in\{1,\dots,n\}$,

$$\frac 1 n \sum_{i=1}^n f(X_i,X_{i+k}) \to \mathbb E(f(X_1,X_{2}))\,\,?$$

If I do not require uniformity in $k$, this result should simply follow from the ergodic law of large numbers. How can I obtain this uniformity?
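As a sanity check on the claim, here is a hypothetical numerical sketch (not part of the question): it picks one specific bounded $f$, namely $f(x,y)=\sin(x)\cos(y)$ with $X_i\sim\mathcal N(0,1)$, so that $\mathbb E f(X_1,X_2)=\mathbb E\sin(X_1)\,\mathbb E\cos(X_2)=0$ by independence and symmetry, and estimates the maximal deviation over $k$.

```python
import numpy as np

# Hypothetical illustration: estimate
#   max_{1<=k<=n} | (1/n) sum_{i=1}^n f(X_i, X_{i+k}) - E f(X_1, X_2) |
# for one specific bounded f and check that it shrinks as n grows.
rng = np.random.default_rng(0)

def f(x, y):
    # a bounded measurable function chosen for the demo
    return np.sin(x) * np.cos(y)

# E f(X_1, X_2) = E sin(X_1) * E cos(X_2) = 0 for X_i ~ N(0, 1),
# since E sin(X) = 0 by symmetry of the standard normal law.
target = 0.0

max_dev = {}
for n in (500, 2000, 8000):
    X = rng.standard_normal(2 * n)          # indices go up to i + k <= 2n
    devs = []
    for k in range(1, n + 1):
        avg = f(X[:n], X[k:k + n]).mean()   # (1/n) sum_{i=1}^n f(X_i, X_{i+k})
        devs.append(abs(avg - target))
    max_dev[n] = max(devs)
    print(n, max_dev[n])
```

The printed maximal deviation decreases with $n$, consistent with the uniform convergence asked about; of course a simulation is no substitute for the proof below.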

Accepted answer.

Here we only need boundedness of $f$ (no assumption on the variance of $X_1$ is used); using Baum–Katz-type results for martingales with identically distributed increments, it is likely that integrability of $\lvert f(X_1,X_2)\rvert^p$ for some $p$ would already suffice.

We have to prove that
$$ \max_{1 \leqslant k \leqslant n} \left\lvert \frac{1}{n}\sum_{i=1}^n f(X_i, X_{i+k}) - \mathbb{E}(f(X_1,X_2))\right\rvert\to 0 $$
almost surely. To this aim, we introduce a martingale: define the $\sigma$-algebras $\mathcal F_j:=\sigma(X_i,\, i\leqslant j)$ and let $d_{k,i}:=f(X_i,X_{i+k})-\mathbb E\left[f(X_i,X_{i+k})\mid\mathcal F_{i+k-1}\right]$. It then suffices to show that
$$\max_{1 \leqslant k \leqslant n} \left\lvert \frac{1}{n}\sum_{i=1}^n d_{k,i}\right\rvert\to 0\ \text{a.s.} \quad\text{and}\quad \max_{1 \leqslant k \leqslant n} \left\lvert \frac{1}{n}\sum_{i=1}^n \mathbb E\left[f(X_i,X_{i+k})\mid\mathcal F_{i+k-1}\right]-\mathbb E\left[f(X_i,X_{i+k}) \right]\right\rvert\to 0\ \text{a.s.}$$

For the first part, it suffices to show the almost sure convergence to $0$ of
$$ Y_N:=\max_{1 \leqslant k \leqslant 2^N}\max_{1\leqslant n\leqslant 2^N} \left\lvert \frac{1}{2^N}\sum_{i=1}^n d_{k,i}\right\rvert; $$
indeed, for $2^{N-1}<n\leqslant 2^N$ and $k\leqslant n$ we have $\left\lvert \frac 1n\sum_{i=1}^n d_{k,i}\right\rvert\leqslant \frac{2^N}{n}\,Y_N\leqslant 2Y_N$. By the Borel–Cantelli lemma, it suffices to show that for each positive $\varepsilon$, the series $\sum_N \mathbb P\left(Y_N>\varepsilon\right)$ converges. To do so, we first use a union bound:
$$ \mathbb P\left(Y_N>\varepsilon\right)\leqslant \sum_{k=1}^{2^N}\mathbb P\left(\max_{1\leqslant n\leqslant 2^N} \left\lvert \frac{1}{2^N}\sum_{i=1}^n d_{k,i}\right\rvert>\varepsilon\right). $$
Since for each $k$, $(d_{k,i})_{i\geqslant 1}$ is a bounded martingale difference sequence (with respect to the filtration $(\mathcal F_{i+k})_{i\geqslant 1}$), the Azuma–Hoeffding inequality (the version with a maximum, or the classical version combined with Doob's maximal inequality) gives
$$ \mathbb P\left(\max_{1\leqslant n\leqslant 2^N} \left\lvert \frac{1}{2^N}\sum_{i=1}^n d_{k,i}\right\rvert>\varepsilon\right)\leqslant c\exp\left(-2^N C\right), $$
where $c$ and $C$ depend only on $\varepsilon$ and $\sup\lvert f\rvert$, not on $k$ or $N$. Hence $\mathbb P\left(Y_N>\varepsilon\right)\leqslant c\, 2^N\exp\left(-2^N C\right)$, which is summable in $N$, as required.
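The summability at the end of this step can be made concrete. A short numeric sketch, under the stated bounds: with $\lvert d_{k,i}\rvert\leqslant b:=2\sup\lvert f\rvert$, Azuma–Hoeffding (combined with Doob's maximal inequality) gives the exponent $C=\varepsilon^2/(2b^2)$, and the union bound multiplies by $2^N$; the partial sums below illustrate that $\sum_N 2^N\exp(-2^N C)$ indeed converges.

```python
import math

# Numeric sketch: with |d_{k,i}| <= b := 2*sup|f|, Azuma-Hoeffding plus
# Doob's maximal inequality gives
#   P( max_{n <= 2^N} |sum_{i<=n} d_{k,i}| > 2^N * eps )
#     <= 2 exp( -(2^N eps)^2 / (2 * 2^N * b^2) ) = 2 exp(-2^N * C),
# with C = eps^2 / (2 b^2). The union bound over k = 1, ..., 2^N then yields
#   P(Y_N > eps) <= 2^N * 2 exp(-2^N * C).
def tail_bound(N, eps=0.1, b=1.0):
    C = eps**2 / (2 * b**2)
    return 2**N * 2 * math.exp(-(2**N) * C)

# Partial sums of the Borel-Cantelli series: the terms die off
# super-exponentially once 2^N * C dominates N * log 2.
partial = 0.0
for N in range(1, 60):
    partial += tail_bound(N)
print(partial)  # stabilizes at a finite value: the series converges
```

The values of `eps` and `b` are placeholders for the demo; any fixed choice gives a convergent series, which is exactly what Borel–Cantelli needs.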

We now have to treat $\max_{1 \leqslant k \leqslant n} \left\lvert \frac{1}{n}\sum_{i=1}^n \mathbb E\left[f(X_i,X_{i+k})\mid\mathcal F_{i+k-1}\right]-\mathbb E\left[f(X_i,X_{i+k}) \right]\right\rvert$. Observe that by independence,
$$\mathbb E\left[f(X_i,X_{i+k})\mid\mathcal F_{i+k-1}\right]=\mathbb E\left[f(X_i,X_{i+k})\mid X_i\right]=g(X_i), \quad\text{where } g(x)=\mathbb E\left[f(x,X_{1})\right],$$
and likewise $\mathbb E\left[f(X_i,X_{i+k})\right]=\mathbb E\left[g(X_i)\right]$. The expression inside the maximum therefore no longer depends on $k$:
$$\max_{1 \leqslant k \leqslant n} \left\lvert \frac{1}{n}\sum_{i=1}^n \mathbb E\left[f(X_i,X_{i+k})\mid\mathcal F_{i+k-1}\right]-\mathbb E\left[f(X_i,X_{i+k}) \right]\right\rvert= \left\lvert \frac{1}{n}\sum_{i=1}^n g(X_i) -\mathbb E\left[g(X_i) \right]\right\rvert,$$
and we conclude with the strong law of large numbers applied to the i.i.d. bounded sequence $(g(X_i))_{i\geqslant 1}$.
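For the demo choice $f(x,y)=\sin(x)\cos(y)$ with $X_i\sim\mathcal N(0,1)$ (a hypothetical example, not part of the answer), $g$ is explicit: $g(x)=\sin(x)\,\mathbb E[\cos(X_1)]=\sin(x)\,e^{-1/2}$, since $\mathbb E[\cos(X_1)]$ is the real part of the characteristic function of $\mathcal N(0,1)$ at $t=1$. The final SLLN step can then be checked numerically:

```python
import math
import numpy as np

# For f(x, y) = sin(x) cos(y) and X ~ N(0, 1):
#   g(x) = E[f(x, X)] = sin(x) * E[cos(X)] = sin(x) * exp(-1/2),
# using that E[cos(X)] = Re E[exp(iX)] = exp(-1/2).
# Then E[g(X_1)] = exp(-1/2) * E[sin(X_1)] = 0 by symmetry, and the
# classical SLLN makes the centered averages of g(X_i) vanish.
rng = np.random.default_rng(1)

def g(x):
    return np.sin(x) * math.exp(-0.5)

n = 200_000
X = rng.standard_normal(n)
avg = g(X).mean()
print(avg)  # close to E[g(X_1)] = 0
```

Since $g$ is bounded, the sequence $(g(X_i))_i$ satisfies the SLLN with no extra moment assumptions, which is the point of the reduction above.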