Tail probabilities of identically distributed variables.


I'm struggling with the following problem:

Let $X_1, X_2, \dots $ be identically distributed (not necessarily independent) non-negative random variables with finite expected values. Show that for any $\epsilon > 0$, $$\lim_{n \to \infty}P\left(\max_{1\le i\leq n} X_i > n\epsilon\right) = 0$$

I can easily solve it if the variables are independent (e.g. using Doob's martingale inequality), but I'm struggling to prove the statement above without that assumption. I've tried, for instance, bounding the probability as follows:

$$P\left(\max_{1\le i\leq n} X_i > n\epsilon\right) \leq \sum_{i=1}^n P(X_i > n\epsilon)$$

But after applying Markov's inequality the bound seems too loose.
I also considered using Doob's inequality here, but I was not able to construct an appropriate martingale.

The exact question I'm asking is: is the theorem in question true?


Best answer:

The "crude bound" you suggest works: we write $$ \mathbb P\left(\max_{1\leqslant i\leqslant n}X_i>n\varepsilon\right)=\mathbb P\left(\bigcup_{i=1}^n\left\{X_i>n\varepsilon\right\}\right)\leqslant \sum_{i=1}^n\mathbb P\left(\left\{X_i>n\varepsilon\right\}\right). $$ Since the random variables $X_i$ have the same distribution as $X_1$, it follows that $$ \mathbb P\left(\max_{1\leqslant i\leqslant n}X_i>n\varepsilon\right)\leqslant n\mathbb P\left(\left\{X_1>n\varepsilon\right\}\right). $$ Integrating the pointwise inequality $$ n\varepsilon\mathbf 1\left\{X_1>n\varepsilon\right\}\leqslant X_1\mathbf 1\left\{X_1>n\varepsilon\right\} $$ gives $$ n\varepsilon\mathbb P\left(\left\{X_1>n\varepsilon\right\}\right)\leqslant \mathbb E\left[X_1\mathbf 1\left\{X_1>n\varepsilon\right\}\right], $$ hence $$ \mathbb P\left(\max_{1\leqslant i\leqslant n}X_i>n\varepsilon\right)\leqslant \frac 1\varepsilon \mathbb E\left[X_1\mathbf 1\left\{X_1>n\varepsilon\right\}\right]. $$ The monotone convergence theorem, applied to $X_1\mathbf 1\left\{X_1\leqslant n\varepsilon\right\}\uparrow X_1$, shows that the right-hand side tends to $0$, which allows us to conclude.
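As a sanity check (not part of the proof), here is a small Monte Carlo sketch of this final bound in the extreme dependent case $X_i = X_1$ for all $i$, so that $\max_{1\le i\le n} X_i = X_1$. The Pareto distribution with shape $3/2$ (finite mean, heavy tail) is an illustrative assumption of mine, not taken from the answer above.

```python
# Monte Carlo sketch (illustration only): with fully dependent variables
# X_i = X_1 for all i, the bound P(max > n*eps) <= E[X_1 * 1{X_1 > n*eps}]/eps
# should hold and tend to 0. The Pareto(1.5) law is an arbitrary choice.
import numpy as np

rng = np.random.default_rng(0)
eps = 1.0
x = rng.pareto(1.5, size=1_000_000) + 1.0  # classical Pareto: P(X > t) = t**-1.5

for n in (10, 100, 1_000, 10_000):
    p_max = np.mean(x > n * eps)              # here max_i X_i = X_1, so this is P(max > n*eps)
    bound = np.mean(x * (x > n * eps)) / eps  # empirical E[X_1 * 1{X_1 > n*eps}] / eps
    print(f"n={n:6d}  P(max > n*eps) = {p_max:.2e}  bound = {bound:.2e}")
    assert p_max <= bound  # holds samplewise, since X_1 > n*eps >= 1 on the event
```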

Another answer:

For a fixed $N>0$, write $$ X_i = X_i 1_{\{X_i\le N\}}+X_i1_{\{X_i>N\}}=X_{i,1}+X_{i,2}. $$ Then $$ \max_{i\le n} X_i \le \max_{i\le n} X_{i,1}+\max_{i\le n}X_{i,2}\le N+\max_{i\le n}X_{i,2}. $$ Hence, for $n\epsilon>N$, $$\begin{eqnarray} \Bbb P(\max_{i\le n} X_i >n\epsilon)&\le& \Bbb P(\max_{i\le n} X_{i,2} >n\epsilon-N)\\&\le&\Bbb P\left(\sum_{i\le n}X_{i,2} >n\epsilon-N\right)\\&\le&\frac{n\Bbb E[X_{1,2}]}{n\epsilon -N} \end{eqnarray}$$ by Markov's inequality, and $$ \limsup_{n\to\infty}\Bbb P(\max_{i\le n} X_i >n\epsilon)\le \frac{\Bbb E[X_{1,2}]}{\epsilon}=\frac{\Bbb E[X_1 1_{\{X_1>N\}}]}{\epsilon}. $$ Letting $N\to\infty$, the dominated convergence theorem gives $$ \limsup_{n\to\infty}\Bbb P(\max_{i\le n} X_i >n\epsilon)\le \lim_{N\to\infty}\frac{\Bbb E[X_1 1_{\{X_1>N\}}]}{\epsilon}=0, $$as desired.
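To see the truncation bound $n\,\Bbb E[X_{1,2}]/(n\epsilon-N)$ in action, here is a short Python computation. It is a sketch under the assumption of a classical Pareto law with shape $3/2$, for which $\Bbb E[X_1 1_{\{X_1>N\}}] = 3/\sqrt N$ in closed form; the distribution and the choices of $N$ and $n$ are illustrative, not part of the argument above.

```python
# Sketch: evaluate the truncation bound n * E[X_{1,2}] / (n*eps - N) for a
# classical Pareto(alpha = 3/2) law, an illustrative assumption.

def tail_mean(N, alpha=1.5):
    # E[X_1 * 1{X_1 > N}] = alpha/(alpha-1) * N**(1-alpha) for Pareto with x_m = 1
    return alpha / (alpha - 1.0) * N ** (1.0 - alpha)

eps = 1.0
bounds = []
for N in (10.0, 100.0, 1000.0):
    n = 100 * int(N)  # any n with n*eps much larger than N
    bounds.append(n * tail_mean(N) / (n * eps - N))
print(bounds)  # decreasing toward 0 as N grows
```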

Another answer:

Indeed, Markov's inequality (for a nonnegative r.v., $P(X>a)\le \frac{E(X)}{a}$) isn't strong enough here. For a stronger result, I'll turn to a more general real-analysis fact:

Suppose $f$ is a nonnegative (Lebesgue) integrable function. Let $S(a)=\{x: f(x)\ge a\}$. Then $$\lim_{a\to\infty}a\cdot\mu(S(a)) = 0$$ Proof: Consider $a=2^n$ for integer $n$, and let $T_n = S(2^n)\setminus S(2^{n+1})$. We have $2^n \le f(x) < 2^{n+1}$ on $T_n$, so $$\sum_{n=-\infty}^{\infty} 2^n \mu(T_n) \le \int f \le \sum_{n=-\infty}^{\infty} 2^{n+1}\mu(T_n),$$ since the $T_n$ are disjoint and their union is the set where $f$ is positive and finite (all but a null set, as $f$ is integrable). In particular, these series converge, so the tail sums go to zero: $$\lim_{N\to\infty}\sum_{n=N}^{\infty}2^n \mu(T_n) = 0$$ Since $$\sum_{n=N}^{\infty}2^n \mu(T_n) \ge \sum_{n=N}^{\infty}2^N \mu(T_n) = 2^N \mu(S(2^N)),$$ it follows that $\lim_{N\to\infty}2^N \mu(S(2^N)) = 0$. That's the result along powers of $2$. For other values, if $2^n <a < 2^{n+1}$, then $S(a)\subseteq S(2^n)$, so $$a\,\mu(S(a)) \le 2^{n+1}\mu(S(2^n)) = 2\cdot 2^n\mu(S(2^n))\to 0$$ Done.

This inequality probably has a name, but I don't know it off the top of my head. The application? The lemma holds for integrals over any measure. If we let $\mu$ be the distribution of a nonnegative random variable $X$, then $\mu(S(a)) = P(X\ge a)$, the integrability condition says exactly that $X$ has a finite expected value, and the result becomes $$\lim_{a\to\infty} a\cdot P(X\ge a) = 0$$ for any nonnegative random variable $X$ with a finite expected value.
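Here is a quick numerical illustration of this lemma; a sketch only, where the Pareto shape $1.2$ (finite mean, heavy tail, so the decay of $a\cdot P(X\ge a)$ is slow and visible) is an arbitrary choice of mine.

```python
# Empirical check that a * P(X >= a) -> 0 for an integrable nonnegative X.
# For a classical Pareto(1.2) law, a * P(X >= a) = a**-0.2, which decays slowly.
import numpy as np

rng = np.random.default_rng(1)
x = rng.pareto(1.2, size=2_000_000) + 1.0  # P(X > a) = a**-1.2, E[X] = 6

vals = [a * np.mean(x >= a) for a in (10.0, 100.0, 1000.0)]
print(vals)  # roughly 10**-0.2, 100**-0.2, 1000**-0.2: about 0.63, 0.40, 0.25
```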

And that's the result you need to make this problem work. Apply the "crude bound" $P\left(\max_{1\le i\le n} X_i > n\epsilon\right)\le n\, P(X_1> n\epsilon)$, and note that $n\cdot P(X_1> n\epsilon)\to 0$ as $n\to\infty$ by the lemma with $a=n\epsilon$ (the extra factor $1/\epsilon$ is harmless).