How to understand the covariance matrix of a $D$-dimensional Gaussian distribution?


When looking at the covariance matrix of a $D$-dimensional Gaussian distribution, it seemed intuitively clear to me that the diagonal entries have to equal $1$. However, when I try to derive the bivariate Gaussian for two independent Gaussian variables $P_1$ and $P_2$, the diagonal entries come out as $\sigma_1^2$ and $\sigma_2^2$. My question is: how can the covariance matrix have only ones on its diagonal in cases where the variances of $P_1$ and $P_2$ are not unitary?

$P_1 = \frac{1}{\sqrt{2\pi}\,\sigma_1} \exp\left(-\frac{1}{2}\left(\frac{x_1-\mu_1}{\sigma_1}\right)^2\right)$

$P_2 = \frac{1}{\sqrt{2\pi}\,\sigma_2} \exp\left(-\frac{1}{2}\left(\frac{x_2-\mu_2}{\sigma_2}\right)^2\right)$

If $P_1$ is independent of $P_2$:

$P_{12} = P_1 \times P_2 \propto \exp\left(-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\right)$

Where: $\Sigma = \begin{pmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{pmatrix}$
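(As a sanity check on this step: multiplying the two univariate densities and adding the exponents makes the diagonal entries explicit; note they are the variances $\sigma_i^2$, not the standard deviations.)

$$P_1 P_2 \propto \exp\!\left(-\frac{1}{2}\left[\frac{(x_1-\mu_1)^2}{\sigma_1^2} + \frac{(x_2-\mu_2)^2}{\sigma_2^2}\right]\right) = \exp\!\left(-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu})\right), \qquad \Sigma^{-1} = \begin{pmatrix} 1/\sigma_1^2 & 0 \\ 0 & 1/\sigma_2^2 \end{pmatrix}.$$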

You are mistaken -- the diagonal of the covariance matrix for a $d$-dimensional Gaussian distribution does not consist of $1$'s.

In general, the covariance matrix $\Sigma$ of the random vector $(X_1,X_2,\ldots,X_d)$ is a $d\times d$ matrix with $(i,j)$ element defined by $$ \Sigma_{i,j}:=\operatorname{Cov}(X_i,X_j), $$ so the diagonal element at $(i,i)$ is $$ \operatorname{Cov}(X_i,X_i) = E(X_i^2) - E(X_i)E(X_i) = \operatorname{Var}(X_i). $$ Note that these values are not always $1$.

For the case $d=2$ with independent variables $(X_1, X_2)$, the covariance matrix is $$ \Sigma = \begin{pmatrix}\operatorname{Var}(X_1) & 0 \\ 0 & \operatorname{Var}(X_2)\end{pmatrix}. $$ This is true for any bivariate distribution, Gaussian or not.
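A quick numerical sanity check makes this concrete. The sketch below (NumPy, with arbitrary parameter values chosen for illustration) samples two independent Gaussians with different variances and shows that the diagonal of the sample covariance matrix recovers the variances, not $1$'s:

```python
import numpy as np

# Arbitrary, non-unit standard deviations chosen for illustration
sigma1, sigma2 = 2.0, 0.5
rng = np.random.default_rng(0)
n = 200_000

# Two independent Gaussian samples with different means and variances
x1 = rng.normal(loc=1.0, scale=sigma1, size=n)
x2 = rng.normal(loc=-3.0, scale=sigma2, size=n)

# Sample covariance matrix of the stacked vector (X1, X2);
# each row of the input is one variable
Sigma = np.cov(np.vstack([x1, x2]))
print(Sigma)
# Diagonal ≈ (sigma1**2, sigma2**2) = (4.0, 0.25); off-diagonal ≈ 0
```

The diagonal entries approximate $\operatorname{Var}(X_1)=4$ and $\operatorname{Var}(X_2)=0.25$, and the off-diagonal entries are near zero because the samples are independent.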