Law of Large Numbers for Martingale Difference Sequences (in probability)

I've been told in class that, given a martingale difference sequence (MDS) $(X_t)_{t\geq 1}$, if $\mathbb{E}|X_t|^p < \infty$ for some $p> 1$, then $$ \frac{1}{T}\sum_{t=1}^T X_t \overset{P}{\to} \mathbb{E}[X_t] = 0. $$ However, the proof given in class is flawed: it relies on the inequality $$ \mathbb{E}\bigg| \frac{1}{T}\sum_{t=1}^T X_t\bigg| \leq \mathbb{E}\bigg|\frac{1}{T}\sum_{t=1}^T X_t \mathbf{1}_{\{|X_t|>M\}}\bigg| $$ for some "sufficiently large" constant $M$, and that inequality does not hold in general. The idea of the proof is to apply Markov's inequality at some point.

On the other hand, I have seen that a similar result is proved by John Elton (1981) in The Annals of Probability. However, he assumes independence. Is independence necessary for this result to hold, or is it a hypothesis that can be relaxed?
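
For what it's worth, here is a quick numerical sanity check (not a proof; the ARCH(1)-style model is just an illustrative choice of mine): the $X_t$ below form an MDS but are clearly dependent, and the fraction of replications in which the sample mean misses $0$ by more than $0.1$ drops as $T$ grows, as the claimed law of large numbers predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

def arch_mds(T, rng):
    # ARCH(1)-style martingale difference sequence (hypothetical example):
    # X_t = eps_t * sqrt(1 + 0.5 * X_{t-1}^2) with eps_t i.i.d. N(0, 1),
    # so E[X_t | F_{t-1}] = 0 although the X_t are far from independent.
    eps = rng.standard_normal(T)
    x = np.empty(T)
    prev = 0.0
    for t in range(T):
        prev = eps[t] * np.sqrt(1.0 + 0.5 * prev**2)
        x[t] = prev
    return x

# Fraction of replications in which the sample mean misses 0 by more than
# 0.1; it should fall toward 0 as T grows (convergence in probability).
for T in [100, 1_000, 10_000]:
    means = [arch_mds(T, rng).mean() for _ in range(200)]
    print(f"T={T:>6}  P(|mean| > 0.1) ~ {np.mean(np.abs(means) > 0.1):.2f}")
```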

Thanks

Best Answer

In the paper by Chow (1971), a stronger result was shown: if $(X_t)_{t\geqslant 1}$ is a uniformly integrable martingale difference sequence, then $T^{-1}\mathbb E\left\lvert\sum_{t=1}^TX_t \right\rvert\to 0$.

In particular, one does not need moments of order $p>1$. And since convergence in $L^1$ implies convergence in probability (by Markov's inequality), this yields the statement from class.

Let us explain the idea. We consider the truncated version of $X_t$ defined by $$ X_{t,\leqslant M}:=X_t\mathbf{1}\{\lvert X_t\rvert\leqslant M\}-\mathbb E\left[X_t\mathbf{1}\{\lvert X_t\rvert\leqslant M\}\mid\mathcal F_{t-1}\right] $$ and the tail part $$ X_{t,\gt M}:=X_t\mathbf{1}\{\lvert X_t\rvert\gt M\}-\mathbb E\left[X_t\mathbf{1}\{\lvert X_t\rvert\gt M\}\mid\mathcal F_{t-1}\right]. $$ In this way, $X_t=X_{t,\leqslant M}+X_{t,\gt M}$ and $\left(X_{t,\leqslant M}\right)_{t\geqslant 1}$ and $\left(X_{t,\gt M}\right)_{t\geqslant 1}$ are martingale difference sequences.
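
To make the decomposition concrete, here is a minimal Python sketch. It assumes a toy model of my own choosing, $X_t=\varepsilon_t s_t$ with symmetric $\varepsilon_t$ and an $\mathcal F_{t-1}$-measurable scale $s_t$; by symmetry the compensator terms happen to vanish, so the split reduces to a plain truncation (for a general MDS the compensators must be kept).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (my choice, not from the paper): X_t = eps_t * s_t with eps_t
# i.i.d. N(0, 1) (symmetric) and s_t an F_{t-1}-measurable scale. By the
# symmetry of eps_t, the conditional-expectation (compensator) terms in the
# definitions of X_{t,<=M} and X_{t,>M} vanish, so the split is a plain
# indicator truncation here.
T, M = 100_000, 2.0
eps = rng.standard_normal(T)
x = np.empty(T)
prev = 0.0
for t in range(T):
    s = np.sqrt(1.0 + 0.5 * prev**2)     # F_{t-1}-measurable scale
    prev = eps[t] * s
    x[t] = prev

low = np.where(np.abs(x) <= M, x, 0.0)   # X_{t,<=M}: truncated part
high = x - low                           # X_{t,>M}:  tail part

assert np.allclose(x, low + high)        # the decomposition X_t = low + high
print("mean of X_{t,<=M}:", low.mean())  # both pieces are martingale
print("mean of X_{t,>M} :", high.mean()) # differences, so means are near 0
```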

Let $\tau\colon M\mapsto \sup_{t\geqslant 1}\mathbb E\left[\left\lvert X_t\right\rvert\mathbf{1}\{\left\lvert X_t\right\rvert>M\}\right]$; by definition of uniform integrability, $\tau(M)\to 0$ as $M$ goes to infinity.
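
As an illustration (again with the same toy model, my own choice, not anything from Chow's paper), one can estimate $\tau(M)$ by Monte Carlo and watch it decay:

```python
import numpy as np

rng = np.random.default_rng(2)

# Estimate tau(M) = sup_t E[|X_t| 1{|X_t| > M}] for the ARCH(1)-style toy
# model above, approximating the expectation by a time average over one
# long (roughly stationary) trajectory.
T = 500_000
eps = rng.standard_normal(T)
x = np.empty(T)
prev = 0.0
for t in range(T):
    prev = eps[t] * np.sqrt(1.0 + 0.5 * prev**2)
    x[t] = prev

a = np.abs(x)
for M in [0, 1, 2, 4, 8]:
    # tau(M) should decrease to 0 as M grows: uniform integrability at work.
    print(f"M={M}  tau(M) ~ {(a * (a > M)).mean():.4f}")
```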

Observe that $$\tag{0} \frac 1T\mathbb E\left\lvert\sum_{t=1}^TX_t \right\rvert\leqslant \frac 1T\mathbb E\left\lvert\sum_{t=1}^TX_{t,\leqslant M}\right\rvert+\frac 1T\mathbb E\left\lvert\sum_{t=1}^TX_{t,\gt M}\right\rvert; $$ therefore, it suffices to bound the contribution of each term.

The second one is easier to treat: by the triangle inequality and the conditional Jensen inequality, $\mathbb E\left\lvert X_{t,\gt M}\right\rvert\leqslant 2\mathbb E\left[\lvert X_t\rvert\mathbf{1}\{\lvert X_t\rvert\gt M\}\right]$, hence $$ \frac 1T\mathbb E\left\lvert\sum_{t=1}^TX_{t,\gt M}\right\rvert\leqslant 2\tau(M)\tag{1}. $$

For the first one, we use the fact that the random variables $\left(X_{t,\leqslant M}\right)_{t\geqslant 1}$, being a martingale difference sequence, are pairwise orthogonal. By the Cauchy-Schwarz inequality, $$ \frac 1T\mathbb E\left\lvert\sum_{t=1}^TX_{t,\leqslant M}\right\rvert \leqslant \frac 1T\sqrt{ \mathbb E\left[\left(\sum_{t=1}^TX_{t,\leqslant M}\right)^2\right] }=\frac 1T\sqrt{ \sum_{t=1}^T\mathbb E\left[X_{t,\leqslant M} ^2\right] }. $$ Then, using $(a-b)^2\leqslant 2a^2+2b^2$ together with the conditional Jensen inequality, and then $X_t^2\mathbf{1}\{\lvert X_t\rvert\leqslant M\}\leqslant M\lvert X_t\rvert$, $$ \mathbb E\left[X_{t,\leqslant M} ^2\right]\leqslant 4\mathbb E\left[X_t^2\mathbf{1}\{\lvert X_t\rvert\leqslant M\}\right]\leqslant 4M\tau(0)\tag{2}, $$ hence the combination of (0), (1) and (2) gives $$ \frac 1T\mathbb E\left\lvert\sum_{t=1}^TX_t \right\rvert\leqslant 2\frac{\sqrt M}{\sqrt T}\sqrt{\tau(0)}+2\tau(M). $$ Take $M=\sqrt T$ to conclude: the right-hand side becomes $2T^{-1/4}\sqrt{\tau(0)}+2\tau\big(\sqrt T\big)$, and both terms go to $0$ as $T\to\infty$, the second by uniform integrability.
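
As a final sanity check (not part of the proof), here is a sketch comparing the empirical value of $T^{-1}\mathbb E\left\lvert\sum_{t=1}^T X_t\right\rvert$ with the bound $2\sqrt{M/T}\sqrt{\tau(0)}+2\tau(M)$ at the choice $M=\sqrt T$, for the same toy model; `arch_paths` and the Monte Carlo sizes are my own choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def arch_paths(T, R, rng):
    # R independent paths of length T from the same ARCH(1)-style toy model,
    # vectorised across replications.
    x = np.zeros((T, R))
    prev = np.zeros(R)
    for t in range(T):
        prev = rng.standard_normal(R) * np.sqrt(1.0 + 0.5 * prev**2)
        x[t] = prev
    return x

# Estimate tau(M) from one long sample ...
a = np.abs(arch_paths(200_000, 1, rng)[:, 0])

def tau(M):
    return (a * (a > M)).mean()

# ... then compare E|S_T|/T (Monte Carlo, 200 replications) with the bound
# 2*sqrt(M/T)*sqrt(tau(0)) + 2*tau(M) at the choice M = sqrt(T).
for T in [100, 1_000, 10_000]:
    M = np.sqrt(T)
    lhs = np.mean(np.abs(arch_paths(T, 200, rng).sum(axis=0))) / T
    rhs = 2.0 * np.sqrt(M / T) * np.sqrt(tau(0.0)) + 2.0 * tau(M)
    print(f"T={T:>6}  E|S_T|/T ~ {lhs:.4f}   bound ~ {rhs:.4f}")
```

For this model the left-hand side decays like $T^{-1/2}$ (the $X_t$ are uncorrelated with bounded variance) while the bound decays like $T^{-1/4}$, so the bound is valid but far from tight here.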