I was asked the following question and thought it was interesting.
Is there an obvious way in which we can show the density of a 2D Gaussian "collapses" to a 1D density as the correlation $\rho \rightarrow 1$? The intuition is that with two perfectly correlated variables $X$ and $Y$, we should end up with a function of just one of them (since two copies of the same variable are redundant). For simplicity, take $\mathbb{E}(X) = \mathbb{E}(Y) = 0.$
More precisely, of course we cannot take $\rho = 1$ directly, since this makes the covariance matrix $\Sigma$ singular. But if we write the correlation as, say, $\rho = 1 - \epsilon$ and take $\epsilon \rightarrow 0$, can we see such a "collapse" of the density to that of a single Gaussian random variable? I messed around with this for a while, but I couldn't get it to work.
Note that there is no reason why the covariance matrix cannot be singular. The only difference is that the distribution will not have a density with respect to the 2D Lebesgue measure. However, the general definition of a multivariate Gaussian distribution is a random vector $Z$ such that $\langle Z,v \rangle$ has a (1D) Gaussian distribution for every vector $v$, which does not assume anything about the covariance matrix. In particular, if $\rho=1$, then the $x$ and $y$ coordinates are perfectly linearly related, which implies that the distribution is of the form $Z=(X,aX+b)$ for $X$ a 1D Gaussian and some constants $a$ and $b$. Note that this distribution is concentrated on a line, and it is multivariate Gaussian because $\langle Z,v \rangle=(v_1+av_2)X+bv_2$ is Gaussian for any $v\in\mathbb{R}^2$.
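The collapse is also easy to see numerically. Here is a quick sketch (my own, assuming unit variances and zero means): with $\mathrm{Var}(X)=\mathrm{Var}(Y)=1$ and correlation $\rho = 1-\epsilon$, we have $\mathrm{Var}(Y-X) = 2(1-\rho) = 2\epsilon$, so the mass concentrates on the line $y=x$ at rate $\sqrt{\epsilon}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample a standard bivariate Gaussian with correlation rho = 1 - eps
# and watch the spread around the line y = x vanish as eps -> 0.
for eps in [0.1, 0.01, 0.001]:
    rho = 1 - eps
    cov = np.array([[1.0, rho], [rho, 1.0]])
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=100_000).T
    # Theory: Var(Y - X) = 2 - 2*rho = 2*eps, i.e. std(Y - X) = sqrt(2*eps).
    print(f"eps={eps:g}  std(Y - X)={np.std(y - x):.4f}  sqrt(2*eps)={np.sqrt(2 * eps):.4f}")
```

The empirical standard deviation of $Y - X$ matches $\sqrt{2\epsilon}$, so in the limit the distribution lives on the line $y = x$, which is exactly the degenerate form $Z = (X, X)$ described above with $a=1$, $b=0$.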