Questions on Showing $X_{n}Y_{n}\xrightarrow{P} XY$


Let $Y_{n}\xrightarrow{P}Y$ and $X_{n}\xrightarrow{P}X$ on a probability space $(\Omega,\mathcal{F},P)$; show that $X_{n}Y_{n}\xrightarrow{P}XY$. As a hint, use the identity $X_{n}Y_{n}-XY=(X_{n}-X)(Y_{n}-Y)+X(Y_{n}-Y)+Y(X_{n}-X)$.

My question pertains only to reasoning for $X(Y_{n}-Y)\xrightarrow{P}0$

Let $\epsilon > 0$ and $n > 0$

$P(|X(Y_{n}-Y)|>\epsilon)=P(|X||Y_{n}-Y|>\epsilon)=P(|X||Y_{n}-Y|>\epsilon,|X|\leq n)+P(|X||Y_{n}-Y|>\epsilon,|X|> n)(*)$

Everything is fine up to this point; however, I do not understand the next inequality, namely:

$(*)\leq P(n|Y_{n}-Y|>\epsilon,|X|=n)+P(|X|>n)(**)$

Then we go on to say,

$(**)\leq P(n|Y_{n}-Y|>\epsilon)+P(|X|>n)=P(|Y_{n}-Y|>\frac{\epsilon}{n})+P(|X|>n)(***)$ and since $Y_{n} \xrightarrow{P} Y$

$(***)\to P(|X|>n)$

The solution then immediately concludes:

$\limsup_{n\to \infty}P(|X||Y_{n}-Y|>\epsilon)=0$, which I do not understand on two counts:

$1.$ Why do we suddenly use $\limsup_{n\to \infty}$ rather than $\lim_{n\to \infty}$? I have only ever used $\lim_{n\to \infty}$ in probability arguments.

$2.$ Our estimates must imply that $\limsup_{n\to \infty} P(|X|>n)=0$, but what is the exact reasoning behind this? Intuitively it makes sense, since $P(|X|>n)$ should become small as $n \to \infty$ if $X$ has certain properties, but we know nothing about $X$ other than that $X_{n} \xrightarrow{P} X$.

If anyone could answer $(*),1.$ and $2.$ I would be extremely grateful.

On BEST ANSWER
  1. We use a $\limsup$ because we do not know a priori whether the limit exists, whereas the $\limsup$ always exists. Note that since your sequence is nonnegative, it converges to zero if and only if its $\limsup$ is zero.
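Spelled out, with $a_n = P(|X||Y_n-Y|>\epsilon) \geq 0$, this is just the squeeze

$$0 \leq \liminf_{n\to\infty} a_n \leq \limsup_{n\to\infty} a_n = 0 \quad\Longrightarrow\quad \lim_{n\to\infty} a_n = 0.$$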

  2. $X$ is a random variable, hence almost surely finite. Thus $P(|X| > n)$ is a decreasing sequence with limit $P(\forall\,n : |X| > n)=P(|X|=\infty)=0$.
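In symbols: the events $A_n = \{|X| > n\}$ decrease in $n$, so continuity of the probability measure from above gives

$$\lim_{n\to\infty} P(A_n) = P\Big(\bigcap_{n\geq 1} A_n\Big) = P(|X| = \infty) = 0.$$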

$(*) \leq (**)$ is false. However, $(*) \leq (***)$ holds.

This is because the event $\{|X||Y_n-Y| > \epsilon,\ |X| \leq p\}$ is contained in the event $\{p|Y_n-Y| > \epsilon\}$, and because the event $\{|X||Y_n-Y| > \epsilon,\ |X| > p\}$ is contained in the event $\{|X| > p\}$.

Note that for the proof to work, the truncation level for $|X|$ should be different from the index $n$ of the sequence $(Y_n)$. The argument reads as follows:

By the same computations, $P(|X||Y_n-Y| > \epsilon) \leq P(|X| > p)+P(|Y_n-Y| > \epsilon/p)$ for every $n$ and $p$; thus the $\limsup$ over $n$ of the left-hand side is at most $P(|X| > p)$ for every $p$. Since $P(|X| > p)$ can be made arbitrarily close to $0$, the $\limsup$, hence the limit, is $0$.
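As a numerical sanity check (not part of the proof), here is a small Monte Carlo sketch of the conclusion. The specific distributions are my own illustrative choice: $X$ and $Y$ are standard Cauchy (heavy-tailed, to stress that no moment assumptions on the limits are needed) and $X_n = X + Z_n/n$, $Y_n = Y + Z'_n/n$ with standard normal noise, so that $X_n \xrightarrow{P} X$ and $Y_n \xrightarrow{P} Y$.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100_000   # Monte Carlo sample size
eps = 0.1

# Heavy-tailed limits: only a.s. finiteness is used, no moments exist.
X = rng.standard_cauchy(m)
Y = rng.standard_cauchy(m)

probs = {}
for n in [1, 10, 100, 1000]:
    # X_n = X + noise/n -> X in probability, and likewise for Y_n.
    Xn = X + rng.standard_normal(m) / n
    Yn = Y + rng.standard_normal(m) / n
    # Empirical estimate of P(|X_n Y_n - X Y| > eps).
    probs[n] = np.mean(np.abs(Xn * Yn - X * Y) > eps)
    print(f"n={n:5d}  P(|X_n Y_n - XY| > {eps}) ~ {probs[n]:.4f}")
```

The estimated probability shrinks toward $0$ as $n$ grows, consistent with $X_nY_n \xrightarrow{P} XY$ even though $X$ and $Y$ have no finite mean.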