I wonder if anyone would be able to help me with a confusion I have got myself into please.
Consider a fixed and given $n$ by $n$ matrix $M$ whose elements are chosen from $\{-1,1\}$. Consider also a random vector $v$ whose elements are chosen independently and uniformly from $\{-1,1\}$.
We know that $H(Mv) = n$ if and only if $M$ is non-singular. Here $H$ is the Shannon entropy of a discrete random variable (in bits).
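As a sanity check of this claim, for small $n$ one can enumerate all $2^n$ vectors $v$ and compute $H(Mv)$ exactly. The sketch below uses two small $\pm 1$ matrices of my own choosing, one non-singular and one singular:

```python
import itertools
import math
from collections import Counter

def matvec(M, v):
    # Integer matrix-vector product, returned as a hashable tuple.
    return tuple(sum(mij * vj for mij, vj in zip(row, v)) for row in M)

def entropy_Mv(M):
    """Exact Shannon entropy (bits) of Mv, with v uniform on {-1,1}^n."""
    n = len(M)
    counts = Counter(matvec(M, v) for v in itertools.product([-1, 1], repeat=n))
    total = 2 ** n
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Non-singular example (det = 4): all 2^n images are distinct, so H = n = 3 bits.
M = [[1, 1, 1], [1, -1, 1], [1, 1, -1]]
print(entropy_Mv(M))        # 3.0

# Singular example (two equal rows): distinct v's collide, entropy drops below n.
M_sing = [[1, 1, 1], [1, 1, 1], [1, 1, -1]]
print(entropy_Mv(M_sing))   # 2.5
```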
Here is my confusion. Each element of $Mv$ is a sum of $n$ i.i.d. $\pm 1$ variables (the position of a simple symmetric random walk after $n$ steps), and so for large $n$ each element of $Mv$, suitably scaled, should converge to a Gaussian.
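To see the Gaussian approximation working for a single coordinate, compare the exact entropy of a sum of $n$ i.i.d. $\pm 1$'s (variance $n$, lattice spacing $2$) against the differential entropy of $N(0,n)$ minus $\log_2 2$; the choice $n = 100$ below is just an illustrative value:

```python
import math

def entropy_pm1_sum(n):
    """Exact Shannon entropy (bits) of S = e1 + ... + en, each ei uniform on {-1,1}."""
    probs = [math.comb(n, k) / 2 ** n for k in range(n + 1)]  # P(S = n - 2k)
    return -sum(p * math.log2(p) for p in probs)

n = 100
exact = entropy_pm1_sum(n)
# Gaussian approximation: variance n, lattice spacing Delta = 2, so
# H ~= (1/2) log2(2*pi*e*n) - log2(2).
approx = 0.5 * math.log2(2 * math.pi * math.e * n) - 1.0
print(exact, approx)   # the two agree to about two decimal places
```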
I think this means that $Mv$, suitably scaled, converges to a multivariate Gaussian with covariance matrix $MM^T$. The differential entropy of a multivariate Gaussian is $\frac{1}{2}\log\big[(2\pi e)^n \det(MM^T)\big]$, which depends on $\log\det(MM^T)$. See Theorem 9.4.1 from Cover and Thomas.
I know there is a correction factor one has to apply when going from discrete to continuous entropy (see Theorem 9.3.1 from Cover and Thomas). However, we would need this correction term to somehow deal with the fact that the entropy is constant as long as $\det(M)\neq 0$. That is, it makes no difference to the entropy how small or large the determinant is, as long as it is not zero.
What is the correction term that deals with this seeming contradiction?
Using the same derivation as in my answer to your question here, we get that the entropy of a discrete (lattice) distribution with "cell size" $\Delta$ is related to the differential entropy of the continuous distribution that approximates it by
$$ H_Y \to h_Z -\log \Delta$$
Now, it's a known result in multivariate analysis that the change in volume induced by a transformation $(s_1, \cdots, s_n)=g(t_1, \cdots, t_n)$ is measured by the Jacobian. In particular, if the transformation is linear, $s = At$, then the Jacobian is the determinant of the matrix: $J=|A|$.
In our case, we have $y=Mv$. The cells in $v$ space ($v_i \in \{-1,1\}$, spacing $2$ per coordinate) have volume $2^n$. Hence the cells in $y$ space have volume $|M| \, 2^n$. Further, since each $v_i$ has unit variance, the differential entropy of a Gaussian with covariance $\Sigma=M M^T$ is $\frac{1}{2} \log[(2\pi e)^{n}|\Sigma|]= \frac{n}{2} \log(2\pi e)+\log |M|$.
Then, assuming that CLT applies and that $|M|\ne 0$:
$$ H_Y \to \frac{n}{2} \log(2\pi e)+\log |M| -\log (|M| \, 2^n)= \frac{n}{2} \log(\pi e/2) $$
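Spelling out the last step, using $\log(|M|\,2^n) = \log|M| + n\log 2$ and $n\log 2 = \frac{n}{2}\log 4$:

$$ \frac{n}{2}\log(2\pi e) + \log|M| - \log|M| - n\log 2 = \frac{n}{2}\log(2\pi e) - \frac{n}{2}\log 4 = \frac{n}{2}\log\frac{2\pi e}{4} = \frac{n}{2}\log\frac{\pi e}{2}. $$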
Hence, the entropy is independent of $|M|$, as it should be.