In an implication like $p \implies q$, is there some measure of how much information is lost in the implication? For example, consider the following implications, where $x \in \{0,1,\ldots,9\}$:
\begin{align} P &: (x = 6) \implies (x \equiv 0 \pmod{2}) \\ Q &: (x = 6) \implies (x \equiv 0 \pmod{3}) \\ R &: (x = 6) \implies (x \equiv 6 \pmod{8}) \end{align}
I would argue that less information about $x$ is lost in implication $Q$ than in implication $P$: Given that $x \equiv 0 \pmod{3}$ and $x \in \{0,1,\ldots,9\}$, there are fewer possibilities for $x$ than if you were given $x \equiv 0 \pmod{2}$.
In implication $R$, I would argue that no information about $x$ is lost, since if $x \equiv 6 \pmod{8}$ and $x \in \{0,1,\ldots,9\}$, then you can conclude $x=6$. In general, any "if and only if" relation loses no information in the implication.
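The counting behind this intuition can be checked directly. The following sketch (my own illustration, not part of the question) lists, for each implication's conclusion, the values in $\{0,1,\ldots,9\}$ that remain consistent with it:

```python
# For each conclusion, count how many x in {0,...,9} satisfy it.
domain = range(10)

p = [x for x in domain if x % 2 == 0]   # conclusion of P: x = 0 (mod 2)
q = [x for x in domain if x % 3 == 0]   # conclusion of Q: x = 0 (mod 3)
r = [x for x in domain if x % 8 == 6]   # conclusion of R: x = 6 (mod 8)

print(len(p), len(q), len(r))  # -> 5 4 1
```

So $P$ leaves 5 candidates, $Q$ leaves 4, and $R$ pins $x$ down uniquely, matching the ordering of "information lost" argued above.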
I'm familiar with the concepts of Shannon entropy and (vaguely) Kolmogorov complexity. Is there an analogous measure of information loss in an implication? Perhaps one of those two could be used in some natural way here?
(Note: I'm not asking about the Chinese remainder theorem. I'm just using modular arithmetic as an example.)
Shannon entropy, $H(X)$, can be intuitively thought of as the amount of uncertainty present in a random variable (r.v.) $X$. The conditional entropy $H(X|Y)$ is the amount of uncertainty remaining in $X$ once the r.v. $Y$ is known. This parallels the information loss you described above: if $H(X|Y_1) < H(X|Y_2)$, then $Y_1$ can be considered to carry more information about $X$ than $Y_2$, since knowing it produces a greater reduction in uncertainty. Another measure that may be of interest to you is mutual information, $I(X;Y) := H(X) - H(X|Y)$. This quantity is the reduction in uncertainty about $X$ obtained by learning $Y$ (and, symmetrically, about $Y$ by learning $X$).
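To connect this to your examples, here is a toy sketch (my own setup, under the assumption that $X$ is uniform on $\{0,\ldots,9\}$ and $Y$ is a deterministic residue of $X$). In that case $H(X|Y=y) = \log_2(\text{size of the residue class } y)$, and $I(X;Y)$ quantifies how much each implication's conclusion tells us about $x$:

```python
from collections import Counter
from math import log2

# X is uniform on {0,...,9}; Y = f(X) is a deterministic function of X.
domain = list(range(10))

def mutual_information(f):
    """I(X; f(X)) = H(X) - H(X|f(X)) for X uniform on `domain`, in bits."""
    n = len(domain)
    classes = Counter(f(x) for x in domain)          # sizes of preimages of Y
    h_x = log2(n)                                    # H(X) for uniform X
    h_x_given_y = sum((c / n) * log2(c) for c in classes.values())
    return h_x - h_x_given_y

print(round(mutual_information(lambda x: x % 2), 3))  # -> 1.0
print(round(mutual_information(lambda x: x % 3), 3))  # -> 1.571
print(round(mutual_information(lambda x: x % 8), 3))  # -> 2.922
```

The mod-8 residue attains the maximum possible value $I(X;Y) = H(X) - 0.4 \approx 2.92$ bits here because it almost always determines $x$ uniquely, matching your observation that implication $R$ loses no information about $x=6$.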
Unfortunately, these measures are only rigorous in a probabilistic setting. The entropy of a deterministic event such as $X=6$, where $\mathbb{P}(X=6)=1$, is zero.
Hope this was useful.