Sufficient condition for convergence in probability to imply convergence in $L^1$


I'm stuck on the following problem.

Let $\xi_1,\xi_2,...$ be positive random variables defined on the same probability space. Suppose $\xi_n \rightarrow \xi$ in probability. If in addition, $\lim\limits_{n\rightarrow\infty}E[\xi_n] = E[\xi]$, then prove that $\xi_n \rightarrow \xi$ in $L^1$.
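For intuition, the expectation hypothesis is exactly what rules out mass escaping to infinity. Here is a minimal numerical sketch (assuming NumPy; variable names are mine) of the standard counterexample $\xi_n = n\,1_{\{U \leq 1/n\}}$ with $U \sim \mathrm{Unif}(0,1)$ and $\xi = 0$: then $\xi_n \to 0$ in probability, but $E[\xi_n] = 1 \neq 0 = E[\xi]$, and indeed $E|\xi_n - 0| = 1$ for every $n$.

```python
import numpy as np

# Sketch: xi_n = n * 1_{U <= 1/n} -> 0 in probability, but E[xi_n] = 1 for all n,
# so xi_n does NOT converge to 0 in L^1 (the hypothesis E[xi_n] -> E[xi] fails).
rng = np.random.default_rng(0)
U = rng.uniform(size=1_000_000)      # one draw of U per sample path

for n in (10, 100, 1000):
    xi_n = n * (U <= 1.0 / n)
    print(n,
          (xi_n > 0.5).mean(),       # P(|xi_n - 0| > 1/2) ~ 1/n -> 0
          np.abs(xi_n).mean())       # E|xi_n - 0| ~ 1, does not -> 0
```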

My approach was the following: Since $|x| = x^+ + x^-$ and $x = x^+ - x^-$,

$$ E[|\xi_n - \xi|] = E[(\xi_n - \xi)^+] + E[(\xi_n - \xi)^-] = E[\xi_n - \xi] + 2E[(\xi_n-\xi)^-]$$ The first term on the right will converge to $0$ by assumption. However, I'm not sure how to deal with the second term, $E[(\xi_n-\xi)^-]$. Would convergence in probability imply that this goes to $0$? For $\epsilon > 0$,

$$ E[(\xi_n-\xi)^-] = \int_{\{|\xi_n-\xi| > \epsilon\}}(\xi_n-\xi)^-\,dP + \int_{\{|\xi_n-\xi| \leq \epsilon\}}(\xi_n-\xi)^-\,dP \leq \int_{\{|\xi_n-\xi| > \epsilon\}}(\xi_n-\xi)^-\,dP + \epsilon$$

I'm tempted to say that the integral converges to $0$ since $P(|\xi_n-\xi|>\epsilon) \rightarrow 0$, but I can't be certain.
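(Side check: the decomposition used above, $E|d| = E[d] + 2E[d^-]$, holds pointwise since $|d| = d^+ + d^- = (d + d^-) + d^-$. A quick NumPy sanity check, with $d$ standing in for $\xi_n - \xi$:)

```python
import numpy as np

rng = np.random.default_rng(1)
d = rng.normal(size=100_000)        # d stands in for xi_n - xi

neg = np.maximum(-d, 0.0)           # d^- = max(-d, 0)
lhs = np.abs(d).mean()              # E|d|
rhs = d.mean() + 2 * neg.mean()     # E[d] + 2 E[d^-]
print(lhs, rhs)                     # equal up to floating-point error
```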


BEST ANSWER

You're basically there. We do have that $$\int_{\{|\xi_n-\xi| > \epsilon\}}(\xi_n-\xi)^-\,dP \to 0$$ as $n \to \infty$. First we have the easy estimate $$\int_{\{|\xi_n-\xi| > \epsilon\}}(\xi_n-\xi)^-\,dP \leq \int_{\{|\xi_n-\xi| > \epsilon\}} \xi\, dP,$$ since $(\xi_n-\xi)^- = (\xi - \xi_n)^+ \leq \xi$ by the positivity of $\xi_n$ and $\xi$. So it suffices to show that the right-hand integral goes to $0$ as $n \to \infty$. This follows from the slightly more general result:

Let $X$ be an integrable random variable. Then $\mathbb{E}[|X| 1_A] \to 0$ as $P(A) \to 0$.

To see this, note that $$\int_A |X|\, dP = \int_{A \cap \{|X| \leq N\}} |X|\, dP + \int_{A \cap \{|X| > N\}} |X|\, dP \leq N P(A) + \int_{\{|X| > N\}} |X|\, dP.$$ Since $X$ is integrable, by taking $N$ large enough you can make the last integral as small as you want (say $< \varepsilon/2$). Then for $P(A) < \frac{\varepsilon}{2N}$ the right-hand side is less than $\varepsilon$.
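(A numerical illustration of the lemma, assuming NumPy, with the integrable but unbounded choice $X = U^{-1/4}$, $U \sim \mathrm{Unif}(0,1)$, and the worst-case events $A = \{U < \delta\}$: here $E[X 1_A] = \tfrac{4}{3}\delta^{3/4} \to 0$ as $P(A) = \delta \to 0$.)

```python
import numpy as np

rng = np.random.default_rng(2)
U = rng.uniform(size=1_000_000)
X = U ** -0.25                    # integrable (E[X] = 4/3) but unbounded

for delta in (0.1, 0.01, 0.001):
    A = U < delta                 # event with P(A) = delta, where X is largest
    print(delta, (X * A).mean())  # E[X 1_A] = (4/3) delta^{3/4} -> 0
```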

ANOTHER ANSWER

You can essentially use this answer: Convergence in measure implies convergence in $L^p$ under the hypothesis of domination. In short: "yo dawg, i herd you like subsequences so i put a subsequence in your subsequence"

In particular, assume $\xi \in L^p$, $\xi_n \to \xi$ in probability, and $E[|\xi - \xi_n|^p] \not\to 0$. Then there exist $\epsilon > 0$ and a subsequence $n_i \to \infty$ such that $E[|\xi - \xi_{n_i}|^p] \geq \epsilon$ for all $i$. Since $\xi_{n_i} \to \xi$ in probability, we can pass to a further subsequence $\xi_{n_{i_k}}$ which converges almost everywhere to $\xi$, i.e. $\xi_{n_{i_k}} - \xi \to 0$ a.e. Applying dominated convergence along this sub-subsequence gives $E[|\xi - \xi_{n_{i_k}}|^p] \to 0$, contradicting $E[|\xi - \xi_{n_{i_k}}|^p] \geq \epsilon$.