I was wondering how you can calculate the upper limit of a probability using only a mean. Without using a sample size or variance.
Example:
The average score for a test is 60 out of 100.
Calculate the upper limit for the probability that a student will score more than 80 out of 100.
The answer is apparently $\frac{3}{4}$, but I cannot find any explanation of how this is calculated.
Every explanation that I can find online uses a mean, a variance, and some sort of distribution. In this example you have only a mean, and yet somehow it is still possible to find an answer.
Am I missing something super obvious or is the writer of this exercise breaking a fundamental law?
Use Markov's Inequality (its proof is similar to Chebyshev's; see the Wikipedia article on 'Markov's inequality'), which states: for a random variable $X$ with $P(X \ge 0) = 1$ and $E(X) = \mu,$ $$P(X \ge a) \le \mu/a.$$ With $\mu = 60$ and $a = 80$ this gives $P(X \ge 80) \le 60/80 = 3/4,$ the stated answer.
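As a quick sanity check, here is a minimal sketch that computes the Markov bound and compares it against one concrete nonnegative distribution with mean 60. The choice of an exponential distribution is purely an assumption for illustration; Markov's inequality only requires nonnegativity and the given mean.

```python
import random

random.seed(0)
mu, a = 60.0, 80.0
bound = mu / a  # Markov's bound: P(X >= a) <= mu/a = 0.75

# Simulate one particular nonnegative distribution with mean 60
# (exponential, chosen only for illustration).
n = 200_000
samples = [random.expovariate(1 / mu) for _ in range(n)]
p_tail = sum(x >= a for x in samples) / n

print(f"Markov bound:        {bound}")
print(f"Empirical P(X>=80):  {p_tail:.3f}")  # comfortably below the bound
```

For this particular distribution the true tail probability is $e^{-80/60} \approx 0.264$, well under $0.75$; the bound must hold for *every* nonnegative distribution with mean 60, which is why it cannot be tight for any particular one.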
Because this inequality holds for such a wide class of distributions, you can't expect the bound to be very good in general. For example, if $Y$ is normal with $\mu = 60$ and $\sigma = 10,$ we would have $P(Y \ge 80) \approx 0.023,$ far below $0.75.$ [Technically, to apply Markov's Inequality here, the normal distribution would have to be truncated to ignore the tiny probability $P(Y < 0)$.]
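The normal tail probability quoted above can be computed with the standard library alone, via the complementary error function (no SciPy needed):

```python
import math

mu, sigma, a = 60.0, 10.0, 80.0
z = (a - mu) / sigma                        # standardized cutoff: 2.0
p_tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(Y >= a) for Y ~ N(mu, sigma^2)

print(f"P(Y >= 80) = {p_tail:.3f}")  # prints 0.023
```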
Addendum: Sketch of the proof for a continuous density. The proof for a discrete distribution is similar, with sums in place of integrals:
$$\mu = E(X) = \int_0^\infty xf_X(x)\,dx \ge \int_a^\infty xf_X(x)\,dx \ge \int_a^\infty af_X(x)\,dx = aP(X \ge a).$$
Notice that the first inequality uses the nonnegativity assumption $P(X \ge 0) = 1$ (the discarded integral over $[0, a)$ is nonnegative), and the second uses $x \ge a$ over the remaining range of integration.
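The chain of inequalities can be checked numerically for a concrete density. The sketch below again uses the exponential with mean 60 (an assumption for illustration; any nonnegative density works) and a simple midpoint rule for the integrals, verifying $\mu \ge \int_a^\infty x f_X(x)\,dx \ge a\,P(X \ge a)$:

```python
import math

mu, a = 60.0, 80.0
f = lambda x: math.exp(-x / mu) / mu  # exponential density with E(X) = 60

def integrate(g, lo, hi, n=200_000):
    """Midpoint-rule approximation of the integral of g over [lo, hi]."""
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) for i in range(n)) * h

# The far tail beyond 3000 is negligible (exp(-50)), so truncate there.
mean         = integrate(lambda x: x * f(x), 0, 3000)  # mu = E(X)
tail_moment  = integrate(lambda x: x * f(x), a, 3000)  # int_a^inf x f(x) dx
a_times_tail = a * integrate(f, a, 3000)               # a * P(X >= a)

print(mean, tail_moment, a_times_tail)  # roughly 60 >= 36.9 >= 21.1
```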