Relation between the convergence in distribution and convergence in probability


Let $Z$ be a Poisson random variable with parameter $\mathbb{E}Z$. I also have another random variable $G$ that is close to $Z$ in distribution, with the error bound

$$ d_{TV}(G,Z)\leq c,$$

where $c$ is a constant and $d_{TV}$ is the total variation distance.

Now I want to use this total variation bound to upper bound the probability

$$P\left( \left| G-Z \right| >t\right)$$

for $t>0$ real and fixed.

However, I don't know how to use it here, because what I am trying to prove is a statement about convergence in probability.


I'm not sure that it's necessarily tight, but you can get somewhere. A standard fact is that $d_{\text{TV}}(X,Y) = d$ implies there exists a coupling of the two random variables $X$ and $Y$ (a joint distribution on a common probability space with the given marginals) under which $\mathbb P(X \ne Y) = d$; this "maximal coupling" attains the infimum, since every coupling satisfies $\mathbb P(X \ne Y) \ge d$. You can find this in any standard text on Markov chains and mixing times, e.g. Section 4.2 of Levin--Peres--Wilmer's Markov Chains and Mixing Times.
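The maximal coupling is constructive for discrete laws: put the overlap mass $\min(p_k, q_k)$ on the diagonal $X = Y$, and spread the excess masses $(p-q)_+$ and $(q-p)_+$ independently off the diagonal. A sketch (the helper name `maximal_coupling` and the small example pmfs are mine, chosen only for illustration):

```python
def maximal_coupling(p, q):
    """Maximal coupling of two pmfs given as dicts {outcome: probability}.
    Returns (joint, d) where joint is the pmf of (X, Y) and d = d_TV(p, q),
    so that under this joint law P(X != Y) = d."""
    support = set(p) | set(q)
    overlap = {k: min(p.get(k, 0.0), q.get(k, 0.0)) for k in support}
    d = 1.0 - sum(overlap.values())  # = d_TV(p, q)
    joint = {}
    # Diagonal part: X = Y = k with the overlap mass min(p_k, q_k).
    for k, m in overlap.items():
        if m > 0:
            joint[(k, k)] = joint.get((k, k), 0.0) + m
    if d > 0:
        # Off-diagonal part: X ~ (p - q)_+ / d and Y ~ (q - p)_+ / d,
        # drawn independently; these excess pmfs have disjoint supports,
        # so every pair here has X != Y.
        p_ex = {k: max(p.get(k, 0.0) - q.get(k, 0.0), 0.0) for k in support}
        q_ex = {k: max(q.get(k, 0.0) - p.get(k, 0.0), 0.0) for k in support}
        for i, pi in p_ex.items():
            for j, qj in q_ex.items():
                if pi * qj > 0:
                    joint[(i, j)] = joint.get((i, j), 0.0) + pi * qj / d
    return joint, d

# Illustrative example: two small pmfs with d_TV = 0.5.
p = {0: 0.5, 1: 0.5}
q = {0: 0.2, 1: 0.3, 2: 0.5}
joint, d = maximal_coupling(p, q)
print(d, sum(m for (i, j), m in joint.items() if i != j))
```

The two printed numbers agree: under this coupling, $\mathbb P(X \ne Y)$ equals $d_{\text{TV}}(p,q)$ exactly.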

If $X$ and $Y$ are actually equal, then we certainly have $|X-Y| \le t$ for all $t \ge 0$. Working under the maximal coupling with $X = G$ and $Y = Z$, we then get the crude upper bound $$ P(|G - Z| > t) \le P(G \ne Z) = d_{\text{TV}}(G,Z) \le c. $$ Note that the event $\{|G - Z| > t\}$ depends on the joint law of $G$ and $Z$, so this bound is a statement about the coupled pair; if a specific joint distribution of $(G, Z)$ is prescribed, the coupling argument does not directly apply to it.
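To make the chain of inequalities concrete, here is a check against an explicit joint pmf. The joint law below is the maximal coupling of the illustrative pmfs $p = (0.5, 0.5)$ on $\{0,1\}$ and $q = (0.2, 0.3, 0.5)$ on $\{0,1,2\}$, for which $d_{\text{TV}} = 0.5$ (these numbers are mine, not from the question):

```python
# Joint pmf of (X, Y) under a maximal coupling; diagonal entries carry the
# overlap mass, off-diagonal entries the excess mass.
joint = {(0, 0): 0.2, (1, 1): 0.3, (0, 2): 0.3, (1, 2): 0.2}

t = 1.0
# Since {|X - Y| > t} is a subset of {X != Y}, the first probability is
# bounded by the second, which equals d_TV under this coupling.
p_far = sum(m for (x, y), m in joint.items() if abs(x - y) > t)  # P(|X-Y| > t)
p_neq = sum(m for (x, y), m in joint.items() if x != y)          # P(X != Y)
print(p_far, p_neq)
```

Here `p_far` comes out strictly smaller than `p_neq`, showing that the bound $P(|X-Y| > t) \le P(X \ne Y)$ can indeed be loose, as the answer anticipates.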