The mutual information between two jointly Gaussian variables $X$ and $Y$ is well known:
$$I(X;Y) = -\frac{1}{2}\ln(1-\rho^2),$$ where $\rho$ is the correlation coefficient of $X$ and $Y$.
According to Doquire and Verleysen (2012), the multivariate mutual information is then
$$I(X_1; X_2; \dots; X_n) = -\frac{1}{2}\ln(\det(\boldsymbol\Sigma)),$$ where $\boldsymbol\Sigma$ is the covariance matrix and $\det$ denotes the determinant.
What is the connection between the correlation coefficient and the determinant of the covariance matrix here? How does $1-\rho^2$ in the bivariate case turn into $\det(\boldsymbol\Sigma)$? Please show the steps of the derivation.
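For concreteness, here is a quick numerical check (a numpy sketch; the covariance entries are made up) comparing the two expressions for a $2\times 2$ covariance with non-unit variances:

```python
import numpy as np

# Made-up 2x2 covariance with non-unit variance for the first variable
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

rho = Sigma[0, 1] / np.sqrt(Sigma[0, 0] * Sigma[1, 1])

mi_bivariate = -0.5 * np.log(1 - rho**2)        # first formula
mi_cited = -0.5 * np.log(np.linalg.det(Sigma))  # cited multivariate formula

print(mi_bivariate)  # positive, as mutual information must be
print(mi_cited)      # negative here, so it cannot be the same quantity
```

The two values agree only when $\sigma_{11}\sigma_{22}=1$, which already suggests the cited formula cannot hold in general.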
It appears to me that the cited formula is wrong. Moreover, the authors of the paper refer to another paper in which I could not find the stated formula at all.

If we define the mutual information in the customary way, as the KL divergence between the joint distribution and the product of the respective marginals, then in the Gaussian case one gets
\begin{equation} I(X_{1},\dots,X_{n})=-\frac{1}{2}\ln\frac{\det(\Sigma_{0})}{\det(\Sigma_{1})}, \end{equation}
with $\Sigma_{0}$ being the covariance matrix of the multivariate Gaussian in question and $\Sigma_{1}$ the covariance matrix of the marginal product measure. Thus, $\Sigma_{1}$ is just the diagonal matrix with the same diagonal entries as $\Sigma_{0}$. (The trace term of the Gaussian KL divergence drops out here, because $\operatorname{tr}(\Sigma_{1}^{-1}\Sigma_{0})=n$ precisely when $\Sigma_{1}$ shares the diagonal of $\Sigma_{0}$.)

And indeed, in the two-dimensional setting we recover your first formula, since
\begin{equation} \det(\Sigma_{0})=\sigma_{11}\sigma_{22}-\sigma_{12}^{2}\quad\text{ and }\quad \det(\Sigma_{1})=\sigma_{11}\sigma_{22}. \end{equation}
The quotient thus reads
\begin{equation} 1-\frac{\sigma_{12}^{2}}{\sigma_{11}\sigma_{22}}, \end{equation}
which is $1-\rho^{2}$. More generally, the ratio $\det(\Sigma_{0})/\det(\Sigma_{1})$ is exactly the determinant of the correlation matrix, so the cited formula is correct only if all variables are standardized to unit variance.
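As a numerical sanity check (a numpy sketch with made-up covariances): the ratio $\det(\Sigma_0)/\det(\Sigma_1)$ reduces to $1-\rho^2$ in two dimensions and, more generally, equals the determinant of the correlation matrix $R=D^{-1/2}\Sigma_0 D^{-1/2}$ with $D=\operatorname{diag}(\Sigma_0)$:

```python
import numpy as np

def gaussian_mi(Sigma0):
    """MI of a multivariate Gaussian via the determinant ratio."""
    Sigma1 = np.diag(np.diag(Sigma0))  # covariance of the marginal product measure
    return -0.5 * np.log(np.linalg.det(Sigma0) / np.linalg.det(Sigma1))

# Two-dimensional check against -1/2 * ln(1 - rho^2)
Sigma2 = np.array([[2.0, 0.6],
                   [0.6, 1.0]])
rho = Sigma2[0, 1] / np.sqrt(Sigma2[0, 0] * Sigma2[1, 1])
assert np.isclose(gaussian_mi(Sigma2), -0.5 * np.log(1 - rho**2))

# Higher-dimensional check: the ratio equals det of the correlation matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Sigma0 = A @ A.T + 4 * np.eye(4)   # random positive-definite covariance
d = 1 / np.sqrt(np.diag(Sigma0))
R = Sigma0 * np.outer(d, d)        # correlation matrix D^{-1/2} Sigma0 D^{-1/2}
assert np.isclose(gaussian_mi(Sigma0), -0.5 * np.log(np.linalg.det(R)))
```

Note that $\det(R)\in(0,1]$ for any positive-definite covariance, so this version of the formula always yields a nonnegative mutual information, as it should.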