Entropy of a normal distribution in bits versus nats in the book Elements of Information Theory


This should have been easy: converting between nats and bits is just a logarithmic change of base. So going from $\log$ base $e$ to base $2$ should, I thought, introduce a factor of $\log_2(e)$ in the denominator. However, in the book Elements of Information Theory, the formula is $$\frac{1}{2}\log(2 \pi e \sigma^2)\, {\rm bits},$$ and this appears right after the book states that it is $$\frac{1}{2}\ln(2 \pi e \sigma^2) \, {\rm nats}.$$

Now I'd like to think they just forgot the denominator, but this formula for the entropy of a normal distribution in bits is repeated many times in the book (2nd edition), so I must be missing something. Do they ignore the denominator because $\log_2(e)$ is so close to $1$ that it can be neglected?


BEST ANSWER

You surely know that

$$\log(a) = \frac{\ln(a)}{\ln(2)}$$

where, as usual in this context, $\log(\cdot)=\log_2(\cdot)$.

Hence, in general, if $H_e$ is the entropy in nats, then the entropy in bits is $H_2 = H_e/\ln 2$, or equivalently $H_e = H_2/\log e$.
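As a quick numerical sanity check of the conversion rule (a sketch; the fair-coin example is my own choice, not from the book):

```python
import math

# Entropy of a fair coin flip, computed in nats.
H_e = math.log(2)        # ln(2) ≈ 0.6931 nats

# Convert nats to bits by dividing by ln(2).
H_2 = H_e / math.log(2)
print(H_2)               # 1.0 bit, as expected for a fair coin
```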

The differential entropy of a Gaussian variable is (in nats)

$$ H_e = \frac{1}{2}\ln(2 \pi e \sigma^2) $$

Hence, in bits it is $$H_2 = \frac{1}{2}\ln(2 \pi e \sigma^2) /\ln(2)=\frac{1}{2}\log(2 \pi e \sigma^2).$$ The factor $\frac{1}{2}$ is unchanged; dividing by $\ln 2$ simply turns $\ln$ into $\log_2$, so no extra denominator appears in the final formula.
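You can verify numerically that the two routes agree (a sketch using Python's `math` module; the choice $\sigma^2 = 1$ is arbitrary, for illustration only):

```python
import math

sigma2 = 1.0  # variance; arbitrary choice for the check

# Differential entropy of N(0, sigma2) in nats.
H_nats = 0.5 * math.log(2 * math.pi * math.e * sigma2)

# Route 1: convert nats to bits by dividing by ln(2).
H_bits_converted = H_nats / math.log(2)

# Route 2: evaluate the book's bits formula directly with log base 2.
H_bits_direct = 0.5 * math.log2(2 * math.pi * math.e * sigma2)

print(H_nats)            # ≈ 1.4189 nats
print(H_bits_converted)  # ≈ 2.0471 bits
print(H_bits_direct)     # same value: the formulas coincide
```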

I'm not sure where your problem is.