I have a quantity that I want to measure, and I have obtained three sets of measurements A, B, and C, each summarized by its mean $\mu_A$, $\mu_B$, $\mu_C$ and standard deviation $\sigma_A$, $\sigma_B$, $\sigma_C$, respectively. Assuming that the measurements follow a Gaussian distribution, I want to understand, from an information theory perspective, which set of measurements contains the most information about the true value of the quantity being measured.
To give some context, let me provide a numerical example. Say the true value of the quantity being measured is $x = 10$. Set A has mean $\mu_A = 9.9$ and $\sigma_A = 0.1$, set B has mean $\mu_B = 10$ and $\sigma_B = 1.0$, and set C has mean $\mu_C = 8$ and $\sigma_C = 10$.
Intuitively, set A is the best because of its low standard deviation (assuming there is no bias), but how can I quantify the amount of information that each set of measurements provides about the true value (perhaps as $-\log_2 p(x)$)? Is there a principled way to compare the information content of sets A, B, and C?
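To make my question concrete, here is a small Python sketch of the $-\log_2 p(x)$ idea I have in mind: evaluating each set's Gaussian density at the true value $x = 10$ and taking the negative base-2 logarithm. (Note that for a continuous density this "surprisal" can be negative, since the density itself can exceed 1, which is part of why I am unsure this is the right quantity.)

```python
import math

def surprisal_bits(x, mu, sigma):
    """Negative log2 of the Gaussian density N(x; mu, sigma^2), in bits."""
    pdf = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    return -math.log2(pdf)

x_true = 10.0
sets = {"A": (9.9, 0.1), "B": (10.0, 1.0), "C": (8.0, 10.0)}
for name, (mu, sigma) in sets.items():
    print(f"Set {name}: {surprisal_bits(x_true, mu, sigma):.3f} bits")
```

Running this, set A gives the smallest surprisal and set C the largest, which matches my intuition, but I don't know whether this is the correct information-theoretic way to frame the comparison.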
Thank you very much for your help!