Any measurement, say the length of an object, has some error. The random errors present in the measurement can be reduced by taking the mean of a large number of samples. This is because the standard deviation of the mean is $\sigma/\sqrt{n}$, where $\sigma$ is the standard deviation of a single measurement and $n$ is the number of measurements.
There is a simple mathematical proof available on Wikipedia:
https://en.wikipedia.org/wiki/Standard_deviation#Standard_deviation_of_the_mean
Can anyone give a visual or intuitive reason why the uncertainty in the mean would be low?
Take a grain of rice and try to measure its weight with the standard scale in your bathroom.
Your result will not be very accurate, since its weight is below the resolution of your scale.
What can you do? Take 50,000 grains of rice (assuming all grains have exactly the same weight), weigh them together and divide the result by 50,000. Now you have a value you can work with.
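The rice trick can be sketched in a few lines of Python. The grain weight and the scale resolution below are assumed illustrative numbers, not part of the original example:

```python
# Sketch of the rice example. Assumed values: a bathroom scale that only
# resolves 100 g steps, and a grain of rice weighing 0.02 g.
GRAIN_G = 0.02          # assumed true weight of one grain, in grams
RESOLUTION_G = 100.0    # assumed scale resolution, in grams

def scale_reading(true_weight_g):
    # the scale rounds the true weight to its coarsest step
    return round(true_weight_g / RESOLUTION_G) * RESOLUTION_G

# One grain is far below the resolution: the scale just reads 0 g.
print(scale_reading(GRAIN_G))
# 50,000 grains weigh about 1000 g, which the scale can read;
# dividing by 50,000 recovers a usable per-grain estimate.
print(scale_reading(50_000 * GRAIN_G) / 50_000)
```

A single grain is invisible to the scale, but the combined weight of many identical grains is well within its range, so the per-grain estimate becomes meaningful.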
A bit more mathematically:
A measurement is modeled by $$ m_i = r + \varepsilon_i $$ where $r$ is the correct, real value and $\varepsilon_i$ is some random error. $m_i$ is the result of the measurement.
This error is independent of the measurement, i.e. it does not depend on the measured value and is "generated" anew for each measurement. Sometimes it is positive, sometimes negative, but always of the "same" order, i.e. it has the same typical impact on the measurement. And if you sum up $N$ such errors, you will get something of the order $\sqrt{N}\varepsilon$, where $\varepsilon$ is again an error of the above order. (You can prove this if the errors are normally distributed with mean 0 and some fixed variance.)
Now measure $N$ times and average: $$ \frac{1}{N}\sum_{i=1}^N m_i = \underbrace{\frac{1}{N}\sum_{i=1}^N r}_{=r} + \frac{1}{N}\underbrace{\sum_{i=1}^N \varepsilon_i}_{\approx \sqrt{N}\varepsilon} \approx r + \frac{1}{\sqrt{N}}\varepsilon $$
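You can also see the $1/\sqrt{N}$ scaling numerically. This is a minimal Monte Carlo sketch: the true value $r$ and error scale $\sigma$ below are arbitrary illustrative choices, and the errors are drawn as independent Gaussians, matching the assumption in the parenthetical above:

```python
# Monte Carlo check that the standard deviation of the mean of N
# measurements shrinks like sigma / sqrt(N).
import random
import statistics

random.seed(0)
r, sigma = 10.0, 1.0  # assumed true value and per-measurement error scale

def mean_of_measurements(n):
    # each measurement is r plus an independent Gaussian error epsilon_i
    return statistics.fmean(random.gauss(r, sigma) for _ in range(n))

for n in (1, 100, 10_000):
    # repeat the whole N-measurement experiment many times and look at
    # how much the resulting means scatter around r
    means = [mean_of_measurements(n) for _ in range(2000)]
    print(n, statistics.stdev(means), sigma / n**0.5)
```

The empirical scatter of the means tracks the predicted $\sigma/\sqrt{N}$ closely: going from 1 to 100 measurements shrinks the uncertainty by a factor of 10.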