Show that if $X$ and $Y$ are i.i.d. random variables such that $\mathbb{E}(X-Y)^{2}<\infty$, then $\mathbb{E}X^{2}<\infty$


As stated in the title, I'd like to show

If $X$ and $Y$ are i.i.d. random variables such that $\mathbb{E}(X-Y)^{2}<\infty$, then $\mathbb{E}X^{2}<\infty$.

My attempt:

We prove that if $\mathbb{E}X^{2}=\infty$, then $\mathbb{E}(X-Y)^{2}=\infty$. To show this, we compute \begin{align*} \mathbb{E}X^{2}&=\int_{0}^{\infty}2x\mathbb{P}\Big(|X|>x\Big)dx\\ &=\int_{0}^{\infty}2x\mathbb{P}\Big(|X-Y|>x, Y=0\Big)dx\\ &\leq\int_{0}^{\infty}2x\mathbb{P}\Big(|X-Y|>x\Big)dx\\ &=\mathbb{E}(X-Y)^{2}, \end{align*} where the inequality is because $\{|X-Y|>x, Y=0\}\subset\{|X-Y|>x\}.$

However, I am really not sure if I am correct, because

$(1)$ I am not sure if the inequality is correct;

$(2)$ I am not sure whether I can write the last equality, since $x$ corresponds to the distribution of $X$; perhaps it should be $\mathbb{E}(X-Y)^{2}=\int_{0}^{\infty}2z\,\mathbb{P}\Big(|X-Y|>z\Big)dz$ for some other variable $z$?

$(3)$ I did not use the i.i.d. property at all...

Could somebody check my work? Thank you!

If I am incorrect, it would be best if someone could provide an alternative proof, or a hint. Thank you!

Edit 2:

As "Kimchi Lover" pointed out, we don't need to assume the integrability of $X$ and $Y$ to prove my problem. He proved the second moment, but in the book from Feller, the result holds for any moment $\alpha>0$. The proof can be found exactly following the post of "Kimchi Lover", but I still think it will be the best if I put the proof here, for other users to look up easily.

Below is the proof from Feller; it combines several lemmas:


Firstly, we claim that if $u$ is bounded and has a continuous derivative $u'$, then $$(***)\ \ \int_{a}^{b^{+}}udF=u(b)F(b)-u(a)F(a)-\int_{a}^{b}u'(x)F(x)dx.$$

Indeed, rearranging the above equality, we have $$\int_{a}^{b^{+}}[u(b)-u(x)]dF-\int_{a}^{b}u'(x)[F(x)-F(a)]dx=0.$$ Suppose $|u'|<M$ and partition $(a,b)$ into congruent intervals $I_{k}$ of length $h$. It is easily seen that the contribution of $I_{k}$ to the LHS is in absolute value less than $2MhF\{I_{k}\}$. Summing over $k$, we find that the LHS is less than $2Mh$ in absolute value, which can be made arbitrarily small. Thus the LHS is $0$, as desired.
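As a quick sanity check of $(***)$ (my own illustration, not part of Feller's text), take $u(x)=x$, $a=0$, $b=1$, and let $F$ be the uniform distribution function on $[0,1]$, so $F(x)=x$ there. Then $$\int_{0}^{1^{+}}u\,dF=\int_{0}^{1}x\,dx=\frac{1}{2},\qquad u(1)F(1)-u(0)F(0)-\int_{0}^{1}u'(x)F(x)\,dx=1-0-\int_{0}^{1}x\,dx=\frac{1}{2},$$ so the two sides of $(***)$ agree.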

Now, applying the above claim, we can conclude that for any $\alpha>0$, we have $$(****)\ \ \int_{0}^{\infty}x^{\alpha}dF=\alpha\int_{0}^{\infty}x^{\alpha-1}[1-F(x)]dx,$$ in the sense that if one side converges, so does the other.

Indeed, because of the infinite interval, $(***)$ does not apply directly; but for every $b<\infty$, applying $(***)$ with $u(x)=x^{\alpha}$ and $a=0$ and rearranging, we have $$\int_{0}^{b^{+}}x^{\alpha}dF=-b^{\alpha}[1-F(b)]+\alpha\int_{0}^{b}x^{\alpha-1}[1-F(x)]dx.$$

Suppose first that the integral on the left converges as $b\longrightarrow\infty$. The contribution of $\overline{(b,\infty)}$ to the infinite integral is $\geq b^{\alpha}[1-F(b)]$, and thus this quantity tends to $0$. In this case, passage to the limit $b\rightarrow\infty$ leads to the result. On the other hand, the integral on the left is smaller than the integral appearing on the right, and hence the convergence of the latter implies the convergence of the former, which concludes the proof of $(****)$.
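To illustrate $(****)$ (again my own example, not Feller's), take $F(x)=1-e^{-x}$ for $x\geq0$, the standard exponential distribution function. Then $$\int_{0}^{\infty}x^{\alpha}\,dF=\int_{0}^{\infty}x^{\alpha}e^{-x}\,dx=\Gamma(\alpha+1),\qquad \alpha\int_{0}^{\infty}x^{\alpha-1}[1-F(x)]\,dx=\alpha\int_{0}^{\infty}x^{\alpha-1}e^{-x}\,dx=\alpha\,\Gamma(\alpha)=\Gamma(\alpha+1),$$ and indeed both sides agree.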

An analogue to $(****)$ holds for the left tail. Combining the two formulas, we get that the distribution $F$ possesses an absolute moment of order $\alpha>0$ if and only if $|x|^{\alpha-1}[1-F(x)+F(-x)]$ is integrable over $\overline{(0,\infty)}$. (Conclusion 1)
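For example (my own illustration), for the standard Cauchy distribution we have, for $x>0$, $$1-F(x)+F(-x)=1-\frac{2}{\pi}\arctan x=\frac{2}{\pi}\arctan\frac{1}{x}\sim\frac{2}{\pi x}\quad (x\to\infty),$$ so $|x|^{\alpha-1}[1-F(x)+F(-x)]$ is integrable over $(0,\infty)$ exactly when $0<\alpha<1$; (Conclusion 1) then recovers the familiar fact that the Cauchy distribution has finite absolute moments of order $\alpha$ precisely for $0<\alpha<1$.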

Now, with the above conclusion, we can prove that if $X$ and $Y$ are independent random variables and $S=X+Y$, then for $\alpha>0$, $\mathbb{E}|S|^{\alpha}$ exists if and only if $\mathbb{E}|X|^{\alpha}$ and $\mathbb{E}|Y|^{\alpha}$ exist.

Indeed, since the moments of $X$ and of $X-c$ are finite or infinite together, we can WLOG assume that $0$ is a median for both $X$ and $Y$. Then, for $t>0$, if $X>t$ and $Y\geq0$, or if $X<-t$ and $Y\leq0$, we have $|S|>t$; by independence and the median property, $$\mathbb{P}(|S|>t)\geq\mathbb{P}(X>t)\mathbb{P}(Y\geq0)+\mathbb{P}(X<-t)\mathbb{P}(Y\leq0)\geq\dfrac{1}{2}\mathbb{P}(|X|>t),$$ and by (Conclusion 1), we know that $\mathbb{E}|S|^{\alpha}<\infty$ implies $\mathbb{E}|X|^{\alpha}<\infty$ (and, by symmetry, $\mathbb{E}|Y|^{\alpha}<\infty$), which proves the direction $(\Rightarrow)$. To prove the converse assertion, we just use the inequality $|S|^{\alpha}\leq 2^{\alpha}(|X|^{\alpha}+|Y|^{\alpha})$.
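To see the tail inequality numerically, here is a small Monte Carlo sanity check (my own illustration, not part of Feller's proof; it assumes `numpy` is available and uses standard Cauchy samples, which are symmetric about $0$ and heavy-tailed):

```python
import numpy as np

# Monte Carlo check of P(|S| > t) >= (1/2) P(|X| > t) for S = X + Y,
# where X and Y are i.i.d. with median 0 (standard Cauchy chosen as an example).
rng = np.random.default_rng(0)
n = 10**6
X = rng.standard_cauchy(n)
Y = rng.standard_cauchy(n)
S = X + Y

for t in [0.5, 1.0, 2.0, 5.0, 10.0]:
    lhs = np.mean(np.abs(S) > t)        # estimate of P(|S| > t)
    rhs = 0.5 * np.mean(np.abs(X) > t)  # estimate of (1/2) P(|X| > t)
    print(f"t = {t:4.1f}   P(|S|>t) ~ {lhs:.4f}   0.5*P(|X|>t) ~ {rhs:.4f}")
```

In each row the left estimate dominates the right one, as the inequality predicts.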


There are three answers below.

Best answer:

According to Feller, An Introduction to Probability Theory and Its Applications, vol. 2, second edition, Lemma 3 on page 151, if $X$ and $Y$ are independent random variables, $E|X+Y|^\alpha$ exists if and only if both $E|X|^\alpha$ and $E|Y|^\alpha$ exist.

This, with the notational change that Feller's $Y$ is the OP's $-Y$, answers the original question without any extra condition.

Here is the argument, cut down to the problem at hand. Since for any fixed $a$ the moment $E|X-a|^2$ exists if and only if $E|X|^2$ exists, we may assume, without loss of generality, that $P(X\ge0)\ge 1/2$ and $P(X\le0)\ge1/2$ (so that $0$ is a "median" for $X$). If either $\{X\le0, Y>t\}$ or $\{X\ge0, Y<-t\}$ occurs, then $|X-Y|>t$, so $$P(|X-Y|>t)\ge P(X\le0)P(Y>t) + P(X\ge0)P(Y<-t)\ge\frac 1 2 P(|Y|>t).$$ Now use the formula $E|T|^2=\int_0^\infty 2t P(|T|>t)\, dt$ on the extreme sides of the displayed inequality.
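Spelling out that last step (just making the final sentence of the answer explicit): $$E(X-Y)^2=\int_0^\infty 2t\,P(|X-Y|>t)\,dt\;\geq\;\frac12\int_0^\infty 2t\,P(|Y|>t)\,dt=\frac12\,E Y^2,$$ so $EY^2\leq 2\,E(X-Y)^2<\infty$, and since $X$ and $Y$ are identically distributed, $EX^2=EY^2<\infty$.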

Answer 2:

Let us prove first that $X,Y\in L^1(\mathbb P)$. We have $$(X-Y)^2=X^2+Y^2-2(X^+Y^+ + X^-Y^-) + 2(X^+Y^-+X^-Y^+)$$

and $\begin{aligned}X^2+Y^2-2(X^+Y^+ + X^-Y^-) &= [|X|^2+|Y|^2-2X^+Y^+]1_{X\geq 0}+[|X|^2+|Y|^2-2X^-Y^-]1_{X< 0}\\ &\geq [(X^+)^2+(Y^+)^2-2X^+Y^+]1_{X\geq 0}+[(X^-)^2+(Y^-)^2-2X^-Y^-]1_{X< 0}\\ &=(X^+-Y^+)^21_{X\geq 0} + (X^--Y^-)^21_{X< 0}\\ &\geq 0 \end{aligned}$

$(X-Y)^2$ is thus the sum of two non-negative terms, hence each term must have finite expectation; in particular $E(X^+Y^-+X^-Y^+)<\infty$, and in turn $E(X^+Y^-)<\infty$ and $E(X^-Y^+)<\infty$. By independence, $E(X^+Y^-)=E(X^+)E(Y^-)$ and $E(X^-Y^+)=E(X^-)E(Y^+)$, which yields the finiteness of $E(X^+)$, $E(X^-)$, $E(Y^+)$ and $E(Y^-)$, hence $X,Y\in L^1(\mathbb P)$.

Independence of $X$ and $Y$ implies that $XY\in L^1(\mathbb P)$. Note that $X^2+Y^2=(X-Y)^2 + 2XY$.

Since $L^1(\mathbb P)$ is a vector space, $X^2+Y^2\in L^1(\mathbb P)$, and since $E(X^2)=E(Y^2)\leq E(X^2+Y^2)<\infty$, we get $X\in L^2(\mathbb P)$ and $Y\in L^2(\mathbb P)$.
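A quick numerical sanity check of the pointwise decomposition used at the start of this answer (my own addition; `numpy` assumed, with $x^{+}=\max(x,0)$ and $x^{-}=\max(-x,0)$):

```python
import numpy as np

# Check numerically that (x - y)^2 = x^2 + y^2 - 2(x+ y+ + x- y-) + 2(x+ y- + x- y+)
# holds pointwise, with x+ = max(x, 0) and x- = max(-x, 0).
rng = np.random.default_rng(1)
x = rng.normal(size=10**5)
y = rng.normal(size=10**5)
xp, xm = np.maximum(x, 0), np.maximum(-x, 0)
yp, ym = np.maximum(y, 0), np.maximum(-y, 0)
lhs = (x - y) ** 2
rhs = x**2 + y**2 - 2 * (xp * yp + xm * ym) + 2 * (xp * ym + xm * yp)
print(np.max(np.abs(lhs - rhs)))  # ~0 up to floating-point rounding
```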

Answer 3:

Here is a small twist on Kimchi lover's answer, which is slightly more direct. Using the same reduction, one may assume without loss of generality that $0$ is a median of $X$ (and thus of $Y$). Then:

$$\mathbb{E} ((X-Y)^2) \geq \mathbb{E} (X^2 \mathbf{1_{X \geq 0}} \mathbf{1_{Y \leq 0}}) + \mathbb{E} (X^2 \mathbf{1_{X \leq 0}} \mathbf{1_{Y \geq 0}}),$$ since $(X-Y)^2\geq X^2$ on each of the events $\{X\geq0,Y\leq0\}$ and $\{X\leq0,Y\geq0\}$, and these events overlap only where $X=0$, which contributes nothing to the right-hand side.

By independence, and because $\mathbb{P}(Y\leq0)\geq\tfrac12$ and $\mathbb{P}(Y\geq0)\geq\tfrac12$ (recall that $0$ is a median of $Y$),

$$\mathbb{E} ((X-Y)^2) \geq \mathbb{E} (X^2 \mathbf{1_{X \geq 0}}) \mathbb{P} (Y \leq 0) + \mathbb{E} (X^2 \mathbf{1_{X \leq 0}}) \mathbb{P} (Y \geq 0) \geq \frac{\mathbb{E} (X^2 \mathbf{1_{X \geq 0}})+ \mathbb{E} (X^2 \mathbf{1_{X \leq 0}})}{2} = \frac{\mathbb{E} (X^2)}{2}.$$

This avoids the formula relating the moments and the tails of $X$.
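One can also sanity-check the resulting bound $\mathbb{E}((X-Y)^2)\geq\mathbb{E}(X^2)/2$ by simulation (my own illustration; `numpy` assumed, with Student-$t$ samples with $3$ degrees of freedom, which are symmetric about $0$ and have a finite second moment):

```python
import numpy as np

# Monte Carlo check of E[(X-Y)^2] >= E[X^2] / 2 when 0 is a median of X.
# Student-t with 3 degrees of freedom: symmetric, heavy-tailed, finite second moment.
rng = np.random.default_rng(0)
n = 10**6
X = rng.standard_t(3, size=n)
Y = rng.standard_t(3, size=n)
print("E[(X-Y)^2] ~", np.mean((X - Y) ** 2))  # roughly 6 (convergence is slow: heavy tails)
print("E[X^2]/2   ~", np.mean(X**2) / 2)      # roughly 1.5
```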