I am reading the book Network Information Theory by El Gamal and Kim, and I can't understand this particular part given in the book:
Let $X \sim f(x^n)$ be a random vector with covariance matrix $K_X = E[(X − E(X))(X − E(X))^T] \succeq 0$. Then $$h(X) \leq \frac{1}{2}\log ((2\pi e)^n|K_X|) \leq \frac{1}{2}\log((2\pi e)^n|E(XX^T)|)$$
where $E(XX^T)$ is the correlation matrix of $X$. The first inequality holds with equality if and only if $X$ is Gaussian, and the second inequality holds with equality if and only if $E(X) = 0$.
I can't understand how the author introduced the second inequality involving the correlation matrix. Is there a result saying that the determinant of the correlation matrix is always greater than or equal to the determinant of the covariance matrix?
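For what it's worth, expanding the definition of $K_X$ gives $E(XX^T) = K_X + E(X)E(X)^T$, so the question is whether $|K_X + \mu\mu^T| \geq |K_X|$ for $\mu = E(X)$. A quick numerical sanity check I ran (using NumPy; the random matrices here are just my own test data, not from the book) suggests this is indeed always the case:

```python
import numpy as np

# Check det(K + mu mu^T) >= det(K) for random positive semidefinite K,
# where E[X X^T] = K_X + mu mu^T with mu = E[X].
rng = np.random.default_rng(0)

for _ in range(1000):
    n = 4
    A = rng.standard_normal((n, n))
    K = A @ A.T               # random positive (semi)definite covariance matrix
    mu = rng.standard_normal((n, 1))
    R = K + mu @ mu.T         # the correlation matrix E[X X^T]
    assert np.linalg.det(R) >= np.linalg.det(K) - 1e-9  # small float tolerance
print("det(correlation) >= det(covariance) held in all trials")
```

The check never fails, which makes me suspect there is a clean algebraic reason (something like a rank-one update identity for determinants), but I don't see how to prove it.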