Information-theoretic interpretation of bijections


Let us consider two sets $A$ and $B$. Suppose there exists a bijection $\phi:A\to B$, so that any $b\in B$ uniquely determines an element $a = \phi^{-1}(b)\in A$. In this sense, we can say that the set $B$ carries the "full information" of $A$, since every element of $A$ is characterized by a unique element of $B$.

My question is, how can we interpret this bijection between two sets using information theory? How should we define probability distributions on $A$ and $B$ and use information theory to say `there is a bijection between $A$ and $B$'?


Saying that $X$ is a random variable and $Y=f(X)$, where $f$ is a (measurable) bijection, is equivalent to saying that both conditional entropies vanish: $H(Y|X)=H(X|Y)=0$. This in turn implies that the mutual information equals each of the entropies, $I(X;Y)=H(X)=H(Y)$.
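These identities can be checked numerically. The sketch below (a minimal illustration, assuming a uniform $X$ on $\{0,1,2,3\}$ and the hypothetical bijection $f(x) = (x+1) \bmod 4$) computes the entropies from the joint distribution of $(X, Y)$ and verifies that $H(Y|X)=H(X|Y)=0$ and $I(X;Y)=H(X)=H(Y)$:

```python
import math
from collections import Counter

def entropy(dist):
    """Shannon entropy in bits of a distribution given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Assumed example: X uniform on {0,1,2,3}, f a bijection on the same set.
px = {x: 0.25 for x in range(4)}
f = lambda x: (x + 1) % 4

# Since Y = f(X), the joint distribution puts all mass on the graph of f.
pxy = {(x, f(x)): p for x, p in px.items()}
py = Counter()
for (x, y), p in pxy.items():
    py[y] += p

H_X, H_Y, H_XY = entropy(px), entropy(py), entropy(pxy)

# Chain rule: H(Y|X) = H(X,Y) - H(X) and H(X|Y) = H(X,Y) - H(Y).
H_Y_given_X = H_XY - H_X
H_X_given_Y = H_XY - H_Y

# Mutual information: I(X;Y) = H(X) + H(Y) - H(X,Y).
I_XY = H_X + H_Y - H_XY

print(H_Y_given_X, H_X_given_Y)  # both 0: Y determines X and vice versa
print(I_XY, H_X, H_Y)            # all equal (2 bits for this uniform example)
```

Because $f$ is a bijection, the joint distribution is concentrated on the graph of $f$, so $H(X,Y)=H(X)=H(Y)$ and both conditional entropies collapse to zero.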