I'm currently reading this paper on information theory and the brain. Within the text (p. 16) they say:
It is important to note that the above expression [the mutual information between a signal and its noise-corrupted version] is only valid when both the signal and the noise are Gaussian. While this is often a reasonable and testable assumption, because of the central limit theorem, it is only an estimate and can underestimate the information (if the signal is more Gaussian than the noise) or overestimate the information (if the noise is more Gaussian than the signal).
What does it mean for something to be "more" Gaussian? How can this be measured?
The "more Gaussian" phrasing here alludes to the property that, for a fixed variance, the Gaussian density is the one that maximizes differential entropy. Hence, one could say that a random variable is more or less Gaussian depending on how close its entropy is to that maximum.
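To make this concrete, here is a minimal sketch comparing a uniform variable to the Gaussian entropy bound at the same variance, using the closed-form entropies (the unit variance is just a chosen example). The entropy gap to the bound (sometimes called negentropy) is one way to quantify "how Gaussian" a variable is:

```python
import math

# Gaussian maximizes differential entropy at fixed variance:
# H = 1/2 * log(2*pi*e*sigma^2), in nats.
sigma2 = 1.0
h_gauss = 0.5 * math.log(2 * math.pi * math.e * sigma2)

# Uniform on [-a, a] has variance a^2/3 and entropy log(2a);
# choose a = sqrt(3 * sigma2) so its variance is also sigma2.
a = math.sqrt(3 * sigma2)
h_uniform = math.log(2 * a)

# Entropy gap to the Gaussian bound ("negentropy"):
# zero iff the variable is Gaussian; larger gap = "less Gaussian".
negentropy = h_gauss - h_uniform
print(h_gauss, h_uniform, negentropy)
```

Any non-Gaussian density at the same variance would show a strictly positive gap; the uniform is just a convenient case with a closed form.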
Let $Y = X + Z$, where the noise $Z$ is independent of the signal $X$.
Then $I(X;Y) = H(Y)-H(Y|X) = H(X+Z)-H(Z)$
Suppose that the signal and the noise have zero mean and fixed variances $\sigma_X^2$, $\sigma_Z^2$ (so $\sigma_Y^2=\sigma_X^2+ \sigma_Z^2$).
Then $H(Z) \le \frac{1}{2} \log \sigma_Z^2 + \alpha $ and $H(X+Z) \le \frac{1}{2} \log \sigma_Y^2 + \alpha$, with the equalities attained if $X$ and $Z$ (and hence $Y$) are Gaussian (here $\alpha = \frac{1}{2} \log 2 \pi e$).
In the Gaussian case, then, we get the familiar
$$I(X;Y) = \frac{1}{2} \log \frac{\sigma_Y^2}{\sigma_Z^2} = \frac{1}{2} \log \left( 1 + \frac{\sigma_X^2}{\sigma_Z^2} \right)$$
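As a quick numerical illustration of this formula (the variances below are assumed example values, not from the text):

```python
import math

# Gaussian-channel mutual information: I = 1/2 * log(1 + SNR).
sigma_x2, sigma_z2 = 4.0, 1.0        # example signal and noise variances
snr = sigma_x2 / sigma_z2

i_nats = 0.5 * math.log(1 + snr)     # natural log -> nats
i_bits = i_nats / math.log(2)        # convert to bits
print(i_nats, i_bits)
```

Note the base of the logarithm only changes the units (nats for natural log, bits for base 2).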
Now, if $Z$ is Gaussian but $X$ is not, then $Y$ is not Gaussian either, and the second inequality is strict. Hence, the information is strictly less than the Gaussian value above.
In other cases ($Z$ not Gaussian), things are a little more complicated, but the idea is the same: compare each differential entropy against its Gaussian upper bound.
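The strict inequality can also be checked numerically. Below is a sketch under assumed parameters: $X$ uniform on $[-a,a]$ with $a=\sqrt{3}$ (so $\sigma_X^2=1$) and $Z$ standard Gaussian. The density of $Y=X+Z$ is the convolution $f_Y(y) = \frac{1}{2a}\left[\Phi\!\left(\frac{y+a}{\sigma}\right)-\Phi\!\left(\frac{y-a}{\sigma}\right)\right]$, from which $H(Y)$ and hence $I(X;Y)=H(Y)-H(Z)$ can be computed by numerical integration:

```python
import math
import numpy as np

# X uniform on [-a, a] (non-Gaussian, variance a^2/3 = 1), Z standard Gaussian.
a, sigma = math.sqrt(3.0), 1.0

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Density of Y = X + Z on a fine grid (tails beyond |y|=10 are negligible).
y = np.linspace(-10.0, 10.0, 40001)
dy = y[1] - y[0]
f_y = np.array([(phi((t + a) / sigma) - phi((t - a) / sigma)) / (2.0 * a)
                for t in y])

# Differential entropy H(Y) = -integral of f log f (Riemann sum; skip zeros).
mask = f_y > 0
h_y = -np.sum(f_y[mask] * np.log(f_y[mask])) * dy

h_z = 0.5 * math.log(2.0 * math.pi * math.e * sigma**2)  # exact Gaussian H(Z)
i_actual = h_y - h_z                            # I(X;Y) = H(X+Z) - H(Z)
i_gauss = 0.5 * math.log(1.0 + 1.0 / sigma**2)  # formula assuming X Gaussian
print(i_actual, i_gauss)
```

As the argument predicts, `i_actual` comes out strictly below `i_gauss`: the Gaussian formula overestimates the true information when the signal is less Gaussian than the noise.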