I want to compare the concept of ``precision of information'' between signals $x \in X$ and states $\omega \in \Omega$ defined by Blackwell and Shannon.
Denote by $\mathcal{P}_{X|\Omega}$ the conditional probability distribution over signals given states, and by $\mathcal{P}_\Omega$ and $\mathcal{P}_X$ the unconditional probability distributions over states and signals respectively. Let the numbers of states and signals be finite. Blackwell says that the conditional probability distribution $\mathcal{P}_{X|\Omega}$ is more informative than another distribution $\mathcal{\tilde{P}}_{X|\Omega}$, written $\mathcal{P}_{X|\Omega} \supset \mathcal{\tilde{P}}_{X|\Omega}$, if and only if there exists a Markov matrix $M$ such that \begin{equation} \mathcal{P}_{X|\Omega} M = \mathcal{\tilde{P}}_{X|\Omega} \end{equation}
For example, suppose the conditional probability distribution $\mathcal{P}_{X|\Omega}$ is given by the following Markov matrix: \begin{equation} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \end{equation} Such a matrix is fully informative: the signal reveals the state with perfect accuracy. Suppose the matrix $M$ is written as: \begin{equation} M = \begin{bmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{bmatrix} \end{equation} The garbled distribution $\mathcal{\tilde{P}}_{X|\Omega} = \mathcal{P}_{X|\Omega} M$ is then completely diffuse, in the sense that the signals do not favour any state.
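As a quick numerical illustration of this garbling (a sketch in NumPy; the array names are mine, and rows index states while columns index signals, as in the matrices above):

```python
import numpy as np

# P[w, x] = probability of signal x given state w
P = np.array([[1.0, 0.0],
              [0.0, 1.0]])   # fully informative experiment

M = np.array([[0.5, 0.5],
              [0.5, 0.5]])   # Markov (garbling) matrix

P_tilde = P @ M              # garbled experiment P~ = P M
print(P_tilde)               # every row is [0.5, 0.5]: signals favour no state
```

Right-multiplying by a Markov matrix mixes the signal columns, which is exactly how Blackwell's order degrades an experiment.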
Shannon uses a different notion of ``informativeness''. Let entropy be given by function $H$. For the unconditional distribution over states, the entropy is:
\begin{equation}
H(\mathcal{P}_\Omega) = - \sum_\Omega \mathcal{P}_\Omega(\omega) \log_2 (\mathcal{P}_\Omega(\omega))
\end{equation}
Informativeness between signals and states is quantified by the mutual information of the unconditional probability distribution between signals and states:
\begin{equation}
I(\mathcal{P}_{X,\Omega}) = H(\mathcal{P}_\Omega) - \sum_X \mathcal{P}_X (x) H(\mathcal{P}_{\Omega|X} ( \,\cdot\, |x))
\end{equation}
In Shannon's notion of informativeness, $\mathcal{P}_{X,\Omega} $ is more informative than $\mathcal{\tilde{P}}_{X,\Omega} $ if $I(\mathcal{P}_{X,\Omega}) > I(\mathcal{\tilde{P}}_{X,\Omega}) $.
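The conditional-entropy form of the mutual information above can be checked numerically on the earlier two-signal example. The following is a sketch (helper names `entropy` and `mutual_information` are my own; rows of the experiment matrix index states, as in the question):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(P_cond, prior):
    """I = H(P_Omega) - sum_x P(x) H(P_{Omega|X}(.|x)),
    for P_cond[w, x] = P(x | w) and a prior over states."""
    joint = prior[:, None] * P_cond        # P(w, x)
    p_x = joint.sum(axis=0)                # signal marginal P(x)
    I = entropy(prior)
    for j, px in enumerate(p_x):
        if px > 0:
            I -= px * entropy(joint[:, j] / px)  # posterior P(. | x_j)
    return I

prior = np.array([0.5, 0.5])
print(mutual_information(np.eye(2), prior))        # fully informative: 1 bit
print(mutual_information(np.full((2, 2), 0.5), prior))  # diffuse: 0 bits
```

With a uniform prior, the fully informative experiment attains the maximum $H(\mathcal{P}_\Omega) = 1$ bit, while the diffuse one yields zero.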
What is the relationship between the two concepts of informativeness? Can one prove that an information structure which is more informative in the sense of one author is also more informative in the sense of the other?
Let $P_X$ and $\tilde{P}_X$ be column stochastic matrices (experiments) of dimension $n_i \times |\Omega|$, $i=1,2$. If there exists a column stochastic matrix $M_{n_1\times n_2}$ such that $P_X=M\tilde{P}_X$, then $\tilde{P}_X$ is said to be Blackwell more informative than $P_X$. Denote by $\geq_B$ this partial ordering on left stochastic matrices (though it is more common to work with right stochastic matrices, I'm keeping a setting close to the question's).
It can be shown that $\tilde{P}_X \geq_B P_X \Rightarrow I(\tilde{P}_X) \geq I(P_X)$, i.e. Blackwell more informative implies higher Shannon Mutual Information, though the converse is not true.
The implication can be shown directly (if tediously) by working through the algebra: take a $\tilde{P}_X$ and an $M$, write out the expression for each entry $P_{X,ij}$, substitute into the definitions, and work through the inequalities.
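A small Monte Carlo sanity check of this implication (not a proof) can be run under the question's row-stochastic convention; all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(P_cond, prior):
    # I = H(P_Omega) - sum_x P(x) H(P_{Omega|X}(.|x)), rows = states
    joint = prior[:, None] * P_cond
    p_x = joint.sum(axis=0)
    I = entropy(prior)
    for j, px in enumerate(p_x):
        if px > 0:
            I -= px * entropy(joint[:, j] / px)
    return I

n_states, n_signals = 3, 4
for _ in range(1000):
    P = rng.dirichlet(np.ones(n_signals), size=n_states)    # experiment P(x|w)
    M = rng.dirichlet(np.ones(n_signals), size=n_signals)   # row-stochastic garbling
    prior = rng.dirichlet(np.ones(n_states))
    # garbling (Blackwell degradation) never increases mutual information
    assert mutual_information(P, prior) >= mutual_information(P @ M, prior) - 1e-12
```

This is just the data-processing inequality at work: $\Omega \to X \to \tilde{X}$ forms a Markov chain, so garbled signals can never carry more information about the state.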
A counterexample for the converse can be found in Rauh et al. (2017), Coarse-graining and the Blackwell Order (see references [5] and [6] therein).