Two-dimensional central limit theorem


Let $(X_i)_i$ be i.i.d. continuous random variables in $\mathbb{R}^2$ with $\mathbb{E}\left[\|X_1\|^2\right]<+\infty$. What does the central limit theorem say in this case? I was wondering if someone could give me a reference for the central limit theorem for two-dimensional random variables.



Let us write $X_i = (X_{i,1}, X_{i,2})$, assume that $\mathbb{E}[X_{i,1}]=\mathbb{E}[X_{i,2}]=0$, and denote by $S = (S_{k,l})$, $1 \leq k,l \leq 2$, the $2 \times 2$ covariance matrix of this vector, so that $S_{k,l} = \mathbb{E}[X_{1,k}X_{1,l}]$. Then the theorem states that, as $n \to \infty$, the law of $Y_n := (X_1 + \dots + X_n)/\sqrt{n}$ converges weakly to the centered Gaussian law with covariance matrix $S$.

The outline of the proof is similar to the one-dimensional case: we first show that the laws of $Y_n = (X_1 + \dots + X_n)/\sqrt{n}$ are tight, and then we identify the unique subsequential limit. Writing $Y_n = (Y_{n,1}, Y_{n,2})$, for all $\alpha_1, \alpha_2 \in \mathbb{R}$ we can apply the one-dimensional CLT to $\alpha_1 Y_{n,1} + \alpha_2 Y_{n,2}$ to see that it converges to a centered Gaussian random variable with variance $\sum_{1 \leq k,l \leq 2} \alpha_k \alpha_l S_{k,l}$. This Cramér–Wold argument characterizes the limit of the laws as the two-dimensional centered Gaussian law with covariance matrix $S$.

Of course, this generalizes directly to any dimension.
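As a quick numerical sanity check of the theorem stated above (not part of the proof), one can verify that the empirical covariance of $Y_n$ approaches $S$. The distribution below, with components $(U, U+V)$ for independent uniforms, is an arbitrary choice for illustration; its covariance matrix can be computed by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary centered 2D example: X = (U, U + V) with U, V ~ Uniform(-1, 1)
# independent. Var(U) = Var(V) = 1/3, so the covariance matrix is
# S = [[1/3, 1/3], [1/3, 2/3]].
S = np.array([[1/3, 1/3], [1/3, 2/3]])

n, trials = 1000, 5000
u = rng.uniform(-1, 1, size=(trials, n))
v = rng.uniform(-1, 1, size=(trials, n))

# One sample of Y_n = (X_1 + ... + X_n) / sqrt(n) per trial
Y = np.stack([u.sum(axis=1), (u + v).sum(axis=1)], axis=1) / np.sqrt(n)

# Empirical covariance across trials; should be close to S
S_hat = np.cov(Y, rowvar=False)
print(np.round(S_hat, 2))
```

With 5000 trials the entries of `S_hat` agree with $S$ to within a few hundredths.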


There is a multivariate version of the Central Limit Theorem.

The main issue that arises in the multivariate setting is that we need to account for the dependence between the components of a random vector, not just its marginal distributions. Consider the following two random vectors:

Let $X=(X_1,X_2)$ with $X_1,X_2$ independent and identically distributed, and let $\widetilde X=(\widetilde X_1,\widetilde X_1)$, whose two coordinates are the same random variable. Suppose $\mathbb E|X_1|^2,\mathbb E|X_2|^2,\mathbb E|\widetilde X_1|^2<\infty$, so that the central limit theorem is applicable. Note that the vector $\widetilde X$ is concentrated on the diagonal $\{x_1=x_2\}$, while the vector $X$ lives on all of $\mathbb R^2$.

Therefore, we have no reason to expect the limiting distribution of sums of i.i.d. copies of $X$ and of i.i.d. copies of $\widetilde X$ to be the same.

The key is to measure this dependence through covariances. In the case of $X$ we have $$\text{Cov}(X)=\begin{pmatrix}\text{Var}(X_1) & 0 \\0 & \text{Var}(X_2)\end{pmatrix}$$ while in the case of $\widetilde X$ we have $$\text{Cov}(\widetilde X)=\begin{pmatrix}\text{Var}(\widetilde X_1) & \text{Var}(\widetilde X_1) \\\text{Var}(\widetilde X_1) & \text{Var}(\widetilde X_1) \end{pmatrix}.$$
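These two covariance matrices can be checked empirically. A hypothetical instance: give $X$ independent standard-normal components, and build $\widetilde X$ by duplicating a single standard-normal coordinate.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 100_000

# X = (X_1, X_2) with independent standard-normal components
x1 = rng.standard_normal(m)
x2 = rng.standard_normal(m)

# tilde X = (Z, Z): the same coordinate repeated
z = rng.standard_normal(m)

cov_X = np.cov(np.stack([x1, x2]))   # ~ diagonal: off-diagonal entries near 0
cov_Xt = np.cov(np.stack([z, z]))    # all four entries equal Var(Z)
print(np.round(cov_X, 2))
print(np.round(cov_Xt, 2))
```

The off-diagonal entries of `cov_X` vanish (up to sampling error), while every entry of `cov_Xt` equals the same sample variance, exactly as in the two displayed matrices.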

Hence the limiting multivariate Gaussians in the corresponding central limit theorems must have exactly these covariance matrices, so the difference between $X$ and $\widetilde X$ persists in the limit.
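A short simulation makes the persistence of this difference concrete (standard-normal coordinates are again an arbitrary choice): every normalized sum of copies of $\widetilde X$ lies on the diagonal, so the limit is a degenerate (rank-one) Gaussian, while normalized sums of copies of $X$ spread over all of $\mathbb R^2$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 500, 5000

# Normalized sums of i.i.d. copies of tilde X = (Z, Z): both coordinates of
# each sum are identical, so every sample lies on the diagonal {x_1 = x_2}.
z = rng.standard_normal((trials, n))
Yt = np.stack([z.sum(axis=1), z.sum(axis=1)], axis=1) / np.sqrt(n)
assert np.allclose(Yt[:, 0], Yt[:, 1])

# Normalized sums of i.i.d. copies of X with independent components:
# the two coordinates of the limit are uncorrelated.
w1 = rng.standard_normal((trials, n)).sum(axis=1) / np.sqrt(n)
w2 = rng.standard_normal((trials, n)).sum(axis=1) / np.sqrt(n)
print(round(float(np.corrcoef(w1, w2)[0, 1]), 2))
```

The printed correlation is close to zero, whereas the coordinates of `Yt` are perfectly correlated by construction.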