Understanding Standard Error ($\frac{\sigma}{\sqrt{n}}$) from the definition


In the "normal" sense, I have understood the standard error to be the standard deviation of the means. Hence, you would need to compute multiple averages ($\bar{x}_1,\dots, \bar{x}_n$) and take their standard deviation (letting $\mu$ be the average of those averages):

$$SE = \sqrt{\frac{\sum\limits_{i=1}^{n} (\mu - \bar{x}_i)^2}{n-1}}$$

I do not understand how this eventually becomes $(1) \ SE = \frac{\sigma}{\sqrt{n}}$. After all, don't you need multiple averages to get this value? How are we able to calculate it with just one set of data? An explanation of how $(1)$ is derived would be greatly appreciated.

Accepted answer:

If the $X_i$ are i.i.d. with expectation $\mu$ and variance $\sigma^2$, and you take a sample of size $n$, then the sample mean $\bar X$ has expectation $\mu$ and variance $\dfrac{\sigma^2}{n}$. The square root of that variance is the standard error of the mean: $\sqrt{\mathbb E\left[(\bar X-\mu)^2\right]} = \sqrt{\dfrac{\sigma^2}{n}}=\dfrac{\sigma}{\sqrt n}$.
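To see where the $\dfrac{\sigma^2}{n}$ comes from, expand the mean and use the independence of the $X_i$ (variances of independent variables add, and constants come out squared):

$$\operatorname{Var}(\bar X) = \operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n^2}\sum_{i=1}^{n}\operatorname{Var}(X_i) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}.$$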

If you do not know $\sigma^2$ then you may want to estimate the standard error from your observations. Your expression is a possible approach based on taking multiple samples, so if you took $m$ of these (not the same as $n$) then you could use $\sqrt{\dfrac{\sum\limits_{j=1}^{m} \left( \bar{x}_j - \overline{\bar x}\right)^2 }{m-1}}$, but this would involve $mn$ observations as well as averages of averages.
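This "many samples" picture can be checked numerically. Below is a minimal sketch with NumPy; the particular values of $\mu$, $\sigma$, $n$, and $m$ are arbitrary assumptions, chosen just to illustrate that the standard deviation of the $m$ sample means comes out close to $\sigma/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0   # assumed true mean and standard deviation
n, m = 50, 10_000      # sample size n, number of repeated samples m

# Draw m independent samples of size n, then compute each sample's mean.
samples = rng.normal(mu, sigma, size=(m, n))
means = samples.mean(axis=1)

# The standard deviation of the m means should be close to sigma / sqrt(n).
empirical_se = means.std(ddof=1)
theoretical_se = sigma / np.sqrt(n)
print(empirical_se, theoretical_se)
```

Note that this uses $mn = 500{,}000$ observations in total, which is exactly the cost the answer points out.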

Alternatively, you could use one sample size $n$ to estimate $\sigma^2$ perhaps with $\dfrac{1}{n-1}\sum\limits_{i=1}^{n} ({x}_i-\bar x )^2$, and thus estimate $\dfrac{\sigma}{\sqrt n}$ using $\sqrt{\dfrac{1}{n(n-1)}\sum\limits_{i=1}^{n} ( {x}_i-\bar x )^2}$.
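A sketch of this single-sample estimate, again with assumed values for $\sigma$ and $n$, showing that $s/\sqrt{n}$ and the combined formula $\sqrt{\frac{1}{n(n-1)}\sum (x_i-\bar x)^2}$ are the same quantity:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n = 2.0, 50
x = rng.normal(0.0, sigma, size=n)   # one sample of n observations

# Estimate sigma with the unbiased sample standard deviation (ddof=1),
# then estimate the standard error of the mean as s / sqrt(n).
s = x.std(ddof=1)
se_hat = s / np.sqrt(n)

# Equivalent direct formula: sqrt( sum((x_i - xbar)^2) / (n*(n-1)) )
se_direct = np.sqrt(((x - x.mean()) ** 2).sum() / (n * (n - 1)))
print(se_hat, se_direct)
```

Unlike the multiple-samples approach, this needs only the $n$ observations you already have.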