If $X_n = Y_n + Z_n$ in distribution, and $X_n$ and $Y_n$ converge in distribution, does $Z_n$?


The random variables take values in $\mathbb{R}^d$.

I have tried to prove this using characteristic functions. Let $\hat{\mu}_{X_n},\hat{\mu}_{Y_n},\hat{\mu}_{Z_n}$ be the characteristic functions of the corresponding random variables. Let $X$ (resp. $Y$) be the distributional limit of $X_n$ (resp. $Y_n$), with characteristic function $\hat{\mu}_{X}$ (resp. $\hat{\mu}_{Y}$).

It is easy to show using the uniform convergence on compact sets of the $\hat{\mu}_{X_n}$ and $\hat{\mu}_{Y_n}$ that $\lim_{n \rightarrow \infty} \hat{\mu}_{Z_n}$ exists on some small ball $B_{\epsilon} := \{z \in \mathbb{R}^d : |z| \leq \epsilon\}$, and that the limiting function is continuous at $0$.
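The characteristic-function bookkeeping behind this can be sanity-checked numerically. The sketch below assumes, in addition, that $Y_n$ and $Z_n$ are independent, so that $\hat{\mu}_{X_n} = \hat{\mu}_{Y_n}\hat{\mu}_{Z_n}$ (the division step implicitly needs this); the sample size and the normal/exponential laws are arbitrary illustrative choices, not from the question.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Independent Y and Z, and X = Y + Z (so X = Y + Z in distribution too).
Y = rng.normal(size=n)
Z = rng.exponential(size=n)
X = Y + Z

def ecf(sample, t):
    """Empirical characteristic function E[exp(i t S)] at a real point t."""
    return np.mean(np.exp(1j * t * sample))

# Under independence the characteristic function of the sum factorizes,
# so the empirical versions should agree up to Monte Carlo error:
for t in [0.3, 1.0, 2.5]:
    print(t, abs(ecf(X, t) - ecf(Y, t) * ecf(Z, t)))
```

Each printed difference is of the order of the Monte Carlo error, roughly $n^{-1/2}$.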

This is "nearly" enough to conclude, but the problem is that there is no way of checking that $\lim_{n \rightarrow \infty} \hat{\mu}_{Z_n}$ exists on the set $A:=\{z \in \mathbb{R}^d : \hat{\mu}_{Y}(z)=0\}$. This is where I get stuck.

Many thanks for your help.

EDIT: (Proposed solution)

What is said above is enough (cf. Chung, pp. 170–171) to conclude that every subsequence of $\mu_{Z_n}$ has a further subsequence converging weakly to a probability measure. It therefore suffices to show that the characteristic functions of these potentially different limits coincide.

Suppose $f$ and $g$ are the characteristic functions of two such subsequential limits of $\mu_{Z_n}$. Then both are continuous (by Arzelà–Ascoli and the contents of Chung, Theorem 6.3.1). Moreover, by the considerations above, $f$ and $g$ agree on the set $A^c := \mathbb{R}^d \setminus A$, where the limit of $\hat{\mu}_{Z_n}$ does exist. It is therefore enough to show that every point of $A$ can be written as a limit of points of $A^c$. If this were not the case, $\hat{\mu}_Y$ would vanish on some small ball $D := B(z_0; \epsilon)$ centered at some $z_0 \in \mathbb{R}^d$ with radius $\epsilon > 0$. Integrating therefore shows that

$$\int_{\mathbb{R}^d} \int_D \bigl(1 - \cos \langle z, x \rangle\bigr)\, dz \, \mu_Y(dx) = 0, $$

and this is impossible unless $Y=0$ almost surely (I think), in which case the claim is trivial.
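To spell out why the vanishing of the double integral would force $Y = 0$ almost surely (a sketch of the last step): the integrand is nonnegative, so it must vanish almost everywhere, i.e.

$$1 - \cos \langle z, x \rangle = 0 \quad \text{for Lebesgue-a.e. } z \in D \text{ and } \mu_Y\text{-a.e. } x.$$

For $x \neq 0$, the set $\{z : \cos \langle z, x \rangle = 1\} = \{z : \langle z, x \rangle \in 2\pi\mathbb{Z}\}$ is a countable union of hyperplanes, hence Lebesgue-null, so it cannot carry almost all of $D$. Thus $\mu_Y$ must be concentrated at $x = 0$, i.e. $Y = 0$ almost surely.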


I present a counterexample to the claim in which all variables are defined on the same probability space (this is not a restriction: it shows that even in this case one cannot expect the result to hold).

If everything lives on a single probability space, we may rewrite the condition given in the question as $Z_n = X_n + (-Y_n)$. It is classical that weak convergence of $X_n$ and of $-Y_n$ is not sufficient to deduce weak convergence of $Z_n$.

Here is such a counterexample: $X_n = W$, $Y_n = (-1)^n W$, where $W$ is a random variable on a probability space $(\Omega, \mathcal{F}, P)$ with $P(W=0) < 1$ and $W \stackrel{d}{=} -W$; the latter means that $W$ is symmetric, e.g. $W \sim N(0,1)$, a standard normal distribution, or even simpler, $P(W=1) = P(W=-1) = 1/2$.

Then $X_n \Rightarrow W$ and $Y_n \Rightarrow W$ trivially as $n\to \infty$. However, $$ X_n - Y_n = \begin{cases} 2W & \text{ if } n \text{ odd} \\ 0 & \text{ if } n \text{ even}. \end{cases} $$ This tells you that $Z_n = X_n-Y_n$ does not converge.
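The alternating behaviour is easy to see in a quick simulation (a minimal sketch using the two-point choice $P(W=1) = P(W=-1) = 1/2$ from above; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric W with P(W = 0) < 1: the two-point law P(W = +1) = P(W = -1) = 1/2.
W = rng.choice([-1.0, 1.0], size=100_000)

# Z_n = X_n - Y_n = W - (-1)^n W alternates between two different laws:
Z_odd = W - (-W)    # n odd:  Z_n = 2W, uniform on {-2, 2}
Z_even = W - W      # n even: Z_n = 0 identically

assert np.all(Z_even == 0)
assert np.all(np.abs(Z_odd) == 2)

# Two subsequences with different limit laws, so (Z_n) does not converge
# in distribution -- although it is tight (all values stay in {-2, 0, 2}).
print(np.mean(Z_odd == 2))  # close to 1/2
```

The two subsequential laws ($\delta_0$ and the uniform law on $\{-2, 2\}$) differ, which is exactly the failure of convergence claimed above.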

What you may deduce, however, is that $(Z_n : n \in \mathbb{N})$ is tight, so by Prohorov's theorem every subsequence has a further weakly convergent subsequence. What is missing to deduce convergence of $Z_n$ is some knowledge of the joint distributions of $(X_n, Y_n)$, $n \in \mathbb{N}$.

EDIT: I erased the earlier remark that "$X_n$ and $Y_n$ need to be defined on the same probability space", which was not true.