Intuition behind entropy requirements in information theoretic structures


Let $X$ be a random variable that takes values in a finite set $\mathcal{X}$. For any $x\in\mathcal{X}$, let $p(x) = P[X = x]$ be the probability that $X$ takes the value $x$. The entropy of $X$ is defined as

$$H(X)=-\sum_{x\in\mathcal{X}}p(x)\log p(x)$$

where $0\cdot \log 0$ is treated as equal to zero. The entropy $H(X)$ measures the uncertainty about the value taken by the random variable $X$. Its minimum value is $0$, achieved if and only if there exists $x_0 \in \mathcal{X}$ such that $p(x_0) = 1$; in other words, there is no uncertainty about the value the random variable takes: it is $x_0$ with probability $1$. Its maximum value is $H(X)=\log|\mathcal{X}|$, achieved if and only if the distribution is uniform, that is, $p(x)=\frac{1}{|\mathcal{X}|}$ for all $x\in \mathcal{X}$ (a uniformly distributed random variable takes each of its possible values with equal probability, i.e. its probability mass function is flat).
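The two extreme cases can be checked numerically. Here is a minimal sketch in Python, assuming the logarithm is taken base $2$ (the post leaves the base unspecified; any fixed base only rescales $H$):

```python
import math

def entropy(p):
    """Shannon entropy H(X) = -sum_x p(x) log2 p(x), with 0*log 0 := 0.

    The `if px > 0` filter implements the convention 0*log 0 = 0.
    """
    return sum(-px * math.log2(px) for px in p if px > 0)

# Degenerate distribution p(x0) = 1: no uncertainty, H attains its minimum.
print(entropy([1.0, 0.0, 0.0]))           # 0.0

# Uniform distribution on |X| = 4 values: H attains its maximum log2(4).
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```

Any non-degenerate, non-uniform distribution on the same support lands strictly between these two bounds.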

The notion of entropy is very useful in information theory, which is widely applied in economics and computer science. Both fields study information shared among a network of participants who communicate with each other. They design protocols in which the agents take part in a multiparty computation scheme so that no single agent can learn the others' secrets on her own; instead, a sufficiently large group of players is needed to reconstruct the secret from the shares each one holds. Such schemes have two requirements: the first is correctness, and the second is information-theoretic privacy.

$\textbf{Question 1:}$ Do the properties of correctness and information-theoretic privacy translate equivalently into the conclusion that no agent can either mislead the other players or spy on their private information?