The searching I have done on questions about hypothesis testing of the significance of sample means turns up many sites that describe the process but don't explain why the particular algebraic manipulations are performed. Considering a specific example: if the underlying distribution being tested is normal (*) with a given (population) mean and variance/standard deviation, I cannot find a link that justifies changing the variance of the population distribution to variance / (sample size) and then working with this modified distribution to test the sample mean. (Hopefully that is clear enough - sorry if the terminology I have used is a bit "off", but that shows my struggles to understand this topic.)
Does anyone have a link that justifies this modification of the variance of the underlying population distribution?
**Example added:**
In this video: On the distribution of sample means (just one example of many), the transformed distribution $\overline{X}\sim N(\mu,\frac{\sigma^2}{n})$ is examined. Where is there a justification, explanation, or proof for dividing $\sigma^2$ by $n$ (the sample size) in this definition of the distribution of the sample mean, $\overline{X}$? Although I like this set of teaching videos, everywhere I have looked (not far enough, I'm sure), the "transformation" of the variance from the original distribution of $X$ to the distribution of $\overline{X}$ is asserted but not really justified.
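For concreteness, here is the short derivation I believe is being asserted (an assumption on my part: the sample values $X_1,\dots,X_n$ are independent and identically distributed with mean $\mu$ and variance $\sigma^2$):

```latex
\mathbb{E}[\overline{X}]
  = \mathbb{E}\!\left[\frac{1}{n}\sum_{i=1}^{n} X_i\right]
  = \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[X_i]
  = \frac{n\mu}{n} = \mu,

\operatorname{Var}(\overline{X})
  = \operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right)
  = \frac{1}{n^2}\sum_{i=1}^{n}\operatorname{Var}(X_i)
  = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}.
```

The two facts doing the work are that the variance of a sum of independent variables is the sum of their variances, and that $\operatorname{Var}(aX)=a^2\operatorname{Var}(X)$, which is where the $1/n^2$ factor comes from.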
(*) Is it also true that, whatever the underlying population distribution is, the sample means will be assumed to be normally distributed?
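To convince myself of the variance claim numerically, here is a minimal simulation sketch (my own illustration, not from the video; the exponential distribution, sample size, and trial count are arbitrary choices):

```python
import random
import statistics

# Draw many samples of size n from an exponential distribution with
# rate 1 (so sigma^2 = 1), compute each sample's mean, and check that
# the empirical variance of those means is close to sigma^2 / n.
random.seed(0)
n = 50          # sample size
trials = 20000  # number of samples drawn

sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(n))
    for _ in range(trials)
]

var_of_means = statistics.variance(sample_means)
print(var_of_means * n)  # should be close to sigma^2 = 1
```

Note the underlying distribution here is not normal, yet the variance of the sample means still scales as $\sigma^2/n$; whether the *shape* of the distribution of $\overline{X}$ is approximately normal is the separate (central limit theorem) question asked above.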
A proof of the mean and variance of the sample mean.