Conditions for extending convergence in distribution of a subsequence to the entire sequence


Suppose I have a sequence for which I can prove convergence in distribution of some infinite subsequence, e.g. let $\{X_n: n \geq 1\}$ be the original sequence and suppose I can show that

$$ X_{n_k} \to X \textrm{ in distribution as } k \to \infty$$ for a subsequence $n_k$ satisfying $n_k \to \infty$ as $k \to \infty$.

Question: What is the weakest set of conditions I can impose on the entire sequence in order to extend the convergence of the subsequence to the whole sequence?

I would imagine this would hold if we could bound the deviations of the terms lying between consecutive elements of the subsequence, but I am just spitballing. For example, something like: $$ \sup_{n_k \leq n \leq n_{k+1}} \left| X_n - X_{n_k} \right| \to 0 \textrm{ in probability as } k \to \infty. $$
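To see why some condition of this kind is needed, here is a quick simulation of an assumed toy example (not from the question): a sequence whose even-indexed subsequence converges in distribution while the full sequence does not, precisely because the gap condition above fails.

```python
import numpy as np

# Assumed toy example: with Z ~ Exp(1), set X_n = Z for even n and
# X_n = Z + 1 for odd n. The even subsequence converges in distribution
# (its law is constant), but the full sequence does not, and the proposed
# gap condition fails: |X_{n+1} - X_n| = 1 for every n.
rng = np.random.default_rng(0)
z = rng.exponential(size=100_000)

p_even = np.mean(z <= 0.5)        # P(X_n <= 0.5) along even n: 1 - e^{-1/2} ~ 0.39
p_odd = np.mean(z + 1.0 <= 0.5)   # P(X_n <= 0.5) along odd n: 0, since Z >= 0
print(f"even-index CDF at 0.5: {p_even:.3f}, odd-index CDF at 0.5: {p_odd:.3f}")
```

The two empirical CDFs disagree at $x = 0.5$ no matter how far out we go, so $P(X_n \leq 0.5)$ has no limit along the full sequence.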


$\def\dto{\xrightarrow{\mathrm{d}}}\def\Pto{\xrightarrow{P}}\def\peq{\mathrel{\phantom{=}}{}}$A stronger proposition can be proved under your condition.

Suppose $\{X_t \mid t > 0\}$ is a family of random variables and $\{X_{t_n} \mid n \in \mathbb{N}_+\}$ is a subsequence with $t_n \nearrow \infty$. If $X_{t_n} \dto X\ (n \to \infty)$ and $\sup\limits_{t_n < t < t_{n + 1}} |X_t - X_{t_n}| \Pto 0\ (n \to \infty)$, then $X_t \dto X\ (t \to \infty)$.
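As a numerical sanity check of this proposition, consider an assumed toy family (not from the answer): $X_t = Z + \sin(2\pi t)/(1+t)$ with $Z \sim N(0,1)$ and $t_n = n$. At integer times the perturbation vanishes, so $X_{t_n} = Z$ exactly, and $W_n \leq 1/(1+n) \to 0$ surely, so both hypotheses hold and the proposition predicts $X_t \dto Z$.

```python
import numpy as np

# Assumed toy family: X_t = Z + sin(2*pi*t)/(1+t) with Z ~ N(0,1), t_n = n.
# X_{t_n} = Z exactly, and W_n = sup_{n < t < n+1} |X_t - X_n| <= 1/(1+n) -> 0
# surely (hence in probability), so the proposition applies.
rng = np.random.default_rng(1)
z = rng.standard_normal(200_000)

def cdf_at_zero(t):
    """Empirical P(X_t <= 0)."""
    return np.mean(z + np.sin(2 * np.pi * t) / (1 + t) <= 0.0)

# The perturbation is largest at quarter-integer times; it fades as t grows,
# and P(X_t <= 0) approaches P(Z <= 0) = 1/2.
early, late = cdf_at_zero(2.25), cdf_at_zero(2000.25)
print(f"P(X_t <= 0) at t=2.25: {early:.3f}, at t=2000.25: {late:.3f}")
```

Along the integers the empirical CDF is exactly that of $Z$; at non-integer times the bias shrinks at rate $1/(1+t)$, matching the conclusion $X_t \dto Z$.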

Proof: Define $W_n = \sup\limits_{t_n < t < t_{n + 1}} |X_t - X_{t_n}|$, $C = \{x \in \mathbb{R} \mid F_X \text{ is continuous at }x\}$. Note that $\mathbb{R} \setminus C$ is at most countable, thus $C$ is dense in $\mathbb{R}$.

Fix $x_0 \in C$ and take two sequences $\{y_m\}, \{z_m\} \subseteq C$ such that $y_m \nearrow x_0$ and $z_m \searrow x_0$ as $m \to \infty$. For any $m \in \mathbb{N}_+$ and $t \geqslant t_1$, let $n$ be the index with $t_n \leqslant t < t_{n + 1}$; then\begin{align*} P(X_t \leqslant x_0) &= P(X_t \leqslant x_0) - P(X_{t_n} \leqslant y_m) + P(X_{t_n} \leqslant y_m)\\ &\geqslant -P(X_t > x_0,\ X_{t_n} \leqslant y_m) + P(X_{t_n} \leqslant y_m)\\ &\geqslant -P(|X_t - X_{t_n}| > x_0 - y_m) + P(X_{t_n} \leqslant y_m)\\ &\geqslant -P(W_n > x_0 - y_m) + P(X_{t_n} \leqslant y_m). \end{align*} As $t \to \infty$ the corresponding index $n \to \infty$; since $W_n \Pto 0$ and $y_m \in C$, letting $t \to \infty$ gives$$ \varliminf_{t \to \infty} P(X_t \leqslant x_0) \geqslant -\lim_{n \to \infty} P(W_n > x_0 - y_m) + \lim_{n \to \infty} P(X_{t_n} \leqslant y_m) = P(X \leqslant y_m), $$ then letting $m \to \infty$ and using the continuity of $F_X$ at $x_0$,$$ \varliminf_{t \to \infty} P(X_t \leqslant x_0) \geqslant P(X \leqslant x_0). $$ Analogously, using $\{z_m\}$,$$ \varlimsup_{t \to \infty} P(X_t \leqslant x_0) \leqslant P(X \leqslant x_0), $$ thus $\lim\limits_{t \to \infty} P(X_t \leqslant x_0) = P(X \leqslant x_0)$. Since $x_0 \in C$ was arbitrary, $X_t \dto X\ (t \to \infty)$.
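For completeness, the bound elided by "Analogously" runs the same argument in the other direction with $\{z_m\}$: for $t_n \leqslant t < t_{n+1}$,$$\begin{align*} P(X_t \leqslant x_0) &= P(X_{t_n} \leqslant z_m) + P(X_t \leqslant x_0) - P(X_{t_n} \leqslant z_m)\\ &\leqslant P(X_{t_n} \leqslant z_m) + P(X_t \leqslant x_0,\ X_{t_n} > z_m)\\ &\leqslant P(X_{t_n} \leqslant z_m) + P(|X_t - X_{t_n}| > z_m - x_0)\\ &\leqslant P(X_{t_n} \leqslant z_m) + P(W_n > z_m - x_0), \end{align*}$$ so $\varlimsup\limits_{t \to \infty} P(X_t \leqslant x_0) \leqslant P(X \leqslant z_m)$, and letting $m \to \infty$ with $z_m \searrow x_0$ gives $\varlimsup\limits_{t \to \infty} P(X_t \leqslant x_0) \leqslant P(X \leqslant x_0)$ by the right-continuity of $F_X$.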