Derivation of standard error - why can we assume all variances are the same?


I'm learning how to derive standard error of the mean mathematically.

I understand that standard error is the square root of the variance of the mean:

$$SE = \sqrt{Var\big((x_1+x_2+...+x_n)/n\big)}$$

And I understand the expansion (the second step uses independence of the $x_i$):

$$SE = \frac{1}{n}\sqrt{Var(x_1+x_2+...+x_n)}$$ $$SE = \frac{1}{n}\sqrt{Var(x_1)+Var(x_2)+...+Var(x_n)}$$
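As a sanity check on this derivation, here is a small simulation (my own sketch, assuming i.i.d. normal draws; the parameter names are arbitrary). It draws many samples of size $n$ from the same distribution, computes each sample mean, and compares the empirical spread of those means with the theoretical standard error $\sigma/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, trials = 2.0, 30, 100_000

# Draw `trials` independent samples of size n, all from the same
# distribution (this is the i.i.d. assumption in the derivation).
samples = rng.normal(loc=5.0, scale=sigma, size=(trials, n))

# One sample mean per trial; the standard error is the standard
# deviation of this collection of means.
means = samples.mean(axis=1)

empirical_se = means.std(ddof=1)
theoretical_se = sigma / np.sqrt(n)
print(empirical_se, theoretical_se)  # the two values should be close
```

With enough trials the empirical value settles near $\sigma/\sqrt{n}$, which is exactly what the algebra above predicts.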

But I don't know why all the variances of the $x_i$ are the same, i.e. $Var(x_1) = Var(x_2) = ... = \sigma^2$.

Texts say "each individual observation has the same variance as the other individuals" or "the $x_i$ are identically distributed, which means they have the same variance $σ^2$".

This is the step I feel doubtful about. 1) Why can we assume all the variances are the same? 2) Also, what do we mean by the "variance of $x_1$, $x_2$", when each $x_i$ is a single observation rather than a collection of raw data points (such as the variance of 30 heights)?

Thank you for answering.