Transformation(s) on a random vector to make its co-ordinates independent?


Let $X=(X_1, \dots, X_n) \in \mathbb{R}^n$ be a random vector whose co-ordinates are not independent. Must there exist a transformation (ideally invertible, say a diffeomorphism) $\phi: \mathbb{R}^n \to \mathbb{R}^n$ such that the co-ordinates $(Y_1, \dots, Y_n)$ of $Y=\phi(X)$ are independent?

If $X$ is normal, the answer is of course yes: applying $\Sigma_X^{-1/2}$ (assuming $\Sigma_X$ is nonsingular) makes the covariance isotropic, and uncorrelated jointly normal co-ordinates are independent. But what about an arbitrary random vector $X$? I'd guess the general answer is no, but then is there a characterization of the random vectors $X\in \mathbb{R}^n$ for which there exists a diffeomorphism $\phi: \mathbb{R}^n \to \mathbb{R}^n$ such that the co-ordinates $(Y_1, \dots, Y_n)$ of $Y=\phi(X)$ are independent?


Let $F_i(x_1, \dots, x_{i-1},\, \cdot\,)$ be the conditional cdf of $X_i$ given $(X_1, \dots, X_{i-1}) = (x_1, \dots, x_{i-1})$, i.e. \begin{equation} F_i(x_1, \dots, x_i) = \mathbb{P}\left.\left( X_i \leq x_i \right|X_1 = x_1, \dots, X_{i-1} = x_{i-1}\right), \end{equation} with $F_1$ simply the marginal cdf of $X_1$. If the distribution functions $F_1, \dots, F_n$ are continuous (e.g. when $(X_1, \dots, X_n)$ has a density), then $\phi(X_1, \dots, X_n) = (F_1(X_1), F_2(X_1, X_2), \dots, F_n(X_1, \dots, X_n)) \sim (U_1, \dots, U_n)$ where the $U_i$ are i.i.d. uniform on $[0,1]$. This map is known as the Rosenblatt transformation.

The reason this holds is a combination of the probability integral transform, the change-of-variables formula for densities, and the chain rule of probability. Let us consider the case $n=2$ for illustration. Write $F : \mathbb{R}^2 \to [0,1]^2$ for the map $F(x_1, x_2) = (F_1(x_1), F_2(x_1, x_2))$ and note that $F$ is invertible (on the support of the density). The joint density of $(U_1, U_2)$ is given by \begin{equation} p_{U_1, U_2}(u_1, u_2) = p_{X_1, X_2}(F^{-1}(u_1, u_2)) \left\lvert \frac{\partial (U_1, U_2)}{\partial(X_1, X_2)}\right\rvert^{-1} \end{equation} for $u_1, u_2 \in [0,1]$. Now, the Jacobian matrix is lower triangular, so its determinant takes the form \begin{equation} \left\lvert\begin{pmatrix} p_{X_1} & 0 \\ \times & p_{X_2 | X_1} \end{pmatrix}\right\rvert = p_{X_1} p_{X_2 | X_1} = p_{X_1, X_2} \end{equation} where the precise value of $\times$ is irrelevant. Plugging this into our expression for the density of $(U_1, U_2)$, the joint density cancels and we get \begin{equation} p_{U_1, U_2}(u_1, u_2) = \mathbb{I}(u_1, u_2 \in [0,1]) \end{equation} as claimed.
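This construction is easy to check numerically. Below is a minimal sketch for the bivariate normal case with correlation $\rho$, where both conditional cdfs are available in closed form: $F_1(x_1) = \Phi(x_1)$ and $F_2(x_1, x_2) = \Phi\big((x_2 - \rho x_1)/\sqrt{1-\rho^2}\big)$, since $X_2 \mid X_1 = x_1 \sim N(\rho x_1,\, 1-\rho^2)$. The function names are mine, not from the answer; only the Python standard library is used.

```python
import math
import random

def phi(x):
    """Standard normal cdf, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rosenblatt_bivariate_normal(x1, x2, rho):
    """Map one sample of a correlated standard bivariate normal to (u1, u2).

    u1 = F_1(x1)     = Phi(x1)
    u2 = F_2(x1, x2) = Phi((x2 - rho*x1) / sqrt(1 - rho^2)),
    using X2 | X1 = x1 ~ N(rho*x1, 1 - rho^2).
    """
    u1 = phi(x1)
    u2 = phi((x2 - rho * x1) / math.sqrt(1.0 - rho * rho))
    return u1, u2

random.seed(0)
rho, n = 0.8, 50_000
us = []
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = z1
    x2 = rho * z1 + math.sqrt(1 - rho * rho) * z2  # corr(X1, X2) = rho
    us.append(rosenblatt_bivariate_normal(x1, x2, rho))

# If (U1, U2) really are i.i.d. uniform on [0,1], the sample means
# should be near 1/2 and the sample covariance near 0.
m1 = sum(u for u, _ in us) / n
m2 = sum(v for _, v in us) / n
cov = sum((u - m1) * (v - m2) for u, v in us) / n
print(m1, m2, cov)
```

Even though $X_1$ and $X_2$ are strongly correlated ($\rho = 0.8$), the transformed co-ordinates come out uncorrelated and uniform, as the argument above predicts.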