Limiting distributions of non-overlapping sums are independent?


I'm stuck on the following statement in Billingsley's *Convergence of Probability Measures*. Let $[\cdot]$ denote the integer part of the argument. Assume that $\{\xi_n\}$ is a sequence of iid r.v.'s with mean zero and finite variance, normalized so that the variance is $1$. Define $S_n = \xi_1 + \cdots + \xi_n,$ and define $X_t^n$ by $$ X_t^n = \frac{1}{\sqrt{n}}S_{[nt]} + (nt - [nt])\frac{1}{\sqrt{n}}\xi_{[nt] + 1}.$$ Then we have the following: for $s \leq t$, $$(X_s^n, X_t^n - X_s^n) \Rightarrow (N_1, N_2) \quad (n \to \infty),$$ where $N_1$ and $N_2$ are independent and normal.
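As a numerical illustration of this claim (my own Monte Carlo sketch, not from the book; the parameter values $n = 1000$, $s = 1/3$, $t = 2/3$ and all variable names are arbitrary choices): for a fixed large $n$, the sample variances of $X_s^n$ and $X_t^n - X_s^n$ come out close to $s$ and $t - s$, and their sample covariance is close to $0$.

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolated_process(xi, t):
    """X_t^n for one path of increments xi (length n): the scaled partial
    sum S_[nt]/sqrt(n) plus the linearly interpolated next increment."""
    n = len(xi)
    k = int(n * t)                # integer part [nt]
    frac = n * t - k              # fractional part nt - [nt]
    next_incr = xi[k] if k < n else 0.0
    return (xi[:k].sum() + frac * next_incr) / np.sqrt(n)

n, s, t, reps = 1000, 1 / 3, 2 / 3, 5000
xs = np.empty(reps)       # samples of X_s^n
incr = np.empty(reps)     # samples of X_t^n - X_s^n
for i in range(reps):
    xi = rng.standard_normal(n)   # iid increments, mean 0, variance 1
    Xs = interpolated_process(xi, s)
    Xt = interpolated_process(xi, t)
    xs[i], incr[i] = Xs, Xt - Xs

# sample variances should be near s = 1/3 and t - s = 1/3,
# sample covariance near 0
print(np.var(xs), np.var(incr), np.cov(xs, incr)[0, 1])
```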

I have two questions:

  1. Why are $N_1$ and $N_2$ independent?
  2. Why can we say that they jointly converge in distribution?

I notice that $X_s^n$ and $X_t^n - X_s^n$ are non-overlapping partial sums (up to the interpolating term) of the independent sequence, so they are independent for each $n$. I know from the CLT that each of the partial sums converges in distribution. But I also know that even if $\{X_n\}$ and $\{Y_n\}$ are two independent sequences (i.e., $X_n$ and $Y_n$ are independent for each $n$), their limiting distributions are not necessarily independent. So I can't see why $N_1$ and $N_2$ are independent under the above conditions.

I'm also wondering about the joint convergence of $X_s^n$ and $X_t^n - X_s^n$. I have only seen that the Lindeberg–Lévy CLT gives convergence in distribution of a single partial sum. Can we derive the joint convergence of non-overlapping partial sums?

Thanks a lot!

Edit: I think question 2 will be solved if the weak limits of $X_s^n$ and $X_t^n - X_s^n$ are independent. So, an answer only to question 1 will be of great help.

On BEST ANSWER

As you guessed, the fact that we obtain in the limit a vector of independent random variables comes from this special setting. To see this, we use the Cramér–Wold device: we have to show that for all real numbers $a$ and $b$, $aX_s^n+b(X_t^n-X_s^n)$ converges in distribution to $aN_1+bN_2$, where $N_1$ and $N_2$ are independent normal. Since $N_1$ and $N_2$ are Gaussian and independent, $aN_1+bN_2$ has a normal distribution with mean zero and variance $a^2s +b^2(t-s)$.
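Spelled out, the variance computation uses bilinearity of covariance together with the independence (hence zero covariance) of $N_1$ and $N_2$, whose variances are $s$ and $t-s$:
$$\operatorname{Var}(aN_1+bN_2)=a^2\operatorname{Var}(N_1)+2ab\operatorname{Cov}(N_1,N_2)+b^2\operatorname{Var}(N_2)=a^2s+b^2(t-s).$$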

One can show that $aX_s^n+b(X_t^n-X_s^n)$ behave like $Y_n:=\frac1{\sqrt n}\left(aS_{[ns]}+b(S_{[nt]}-S_{[ns]})\right)$ and a use of the central limit theorem under Lindeberg's conditions for an array of rowwise independent random variables shows that the former expression converges to a Gaussian random variable whose limit is the limit of the variance of $Y_n$, which is indeed $a^2s +b^2(t-s) $.