The variance of sample means is given by $\sigma ^2 / n$ where $n$ is the sample size.
Interestingly, this does not seem to depend on the size of the population — one might expect the variance to depend on the ratio of sample size to population size, but this is not the case.
Is there any intuitive reason for why this is true?
This isn't really an intuitive argument, but the derivation shows where the $1/n$ comes from.
Let $X$ be a random variable such that $X \sim N(\mu,\sigma^2)$
If we have a sample of $n$ independent observations of $X$ then
$$X_1 +X_2 +X_3+ \dots +X_n \sim N(n\mu,n\sigma^2)$$
$$\text{sample mean} = \bar X = \frac{X_1 +X_2 +X_3+ \dots +X_n}{n} \sim N(\mu,\frac{\sigma^2}{n})$$
Note that $aX \sim N(a\mu, a^2\sigma^2)$; taking $a = 1/n$ in the last step is what produces the variance $\sigma^2/n$.
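The key is that the derivation only uses independence of the observations, so the population size never enters. A quick simulation sketch (the specific population sizes, sample size, and trial count below are arbitrary choices of mine) shows that when we sample with replacement — so observations are independent — the variance of the sample mean stays near $\sigma^2/n$ no matter how large the population is:

```python
import random
import statistics

random.seed(0)

n = 25          # sample size
trials = 20000  # number of simulated samples per population size

for pop_size in (100, 10_000, 100_000):
    # A finite population whose values are drawn from N(mu=0, sigma=1),
    # so its variance sigma^2 is close to 1.
    population = [random.gauss(0.0, 1.0) for _ in range(pop_size)]
    sigma2 = statistics.pvariance(population)

    # Draw many samples WITH replacement (independent observations)
    # and record each sample mean.
    means = [statistics.fmean(random.choices(population, k=n))
             for _ in range(trials)]

    # Var(sample mean) should be close to sigma^2 / n for every pop_size.
    print(pop_size, statistics.variance(means), sigma2 / n)
```

Sampling *without* replacement from a small population would break the independence assumption, and that is exactly the case where a finite-population correction factor appears.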