According to Wikipedia, the standard error $\sigma_{\bar{x}}$ of a sample mean can be computed as $\frac{\sigma}{\sqrt{n}}$, where $\sigma$ is the standard deviation of the population and $n$ is the number of observations in the sample.
The derivation of this equation follows from the variance of a sum of independent random variables. That is, if $x_1, x_2, \ldots, x_n$ are $n$ independent observations from a population with mean $\mu$ and standard deviation $\sigma$, we can define $$T = x_1 + x_2 + \ldots + x_n,$$ which gives $\mathrm{Var}(T) = \mathrm{Var}(x_1) + \mathrm{Var}(x_2) + \ldots + \mathrm{Var}(x_n) = n\sigma^2$.
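To check this numerically, I ran a quick simulation (my own sketch, assuming NumPy; the population parameters `n`, `sigma`, and the number of trials are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, trials = 10, 2.0, 200_000  # sample size, population sd, repetitions

# Each row is one sample of n independent draws from a population
# with standard deviation sigma (here a normal population, loc is arbitrary).
samples = rng.normal(loc=5.0, scale=sigma, size=(trials, n))

# T = x_1 + ... + x_n for each sample
T = samples.sum(axis=1)

print(T.var())  # empirically close to n * sigma**2 = 40
```

The empirical variance of $T$ does come out near $n\sigma^2$, which is what the derivation claims.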
What confuses me is the result $n\sigma^2$. I thought that $\sigma$ is the population standard deviation, not the standard deviation of a single observation. That is, shouldn't a population with only one observation have $\sigma = 0$?
Furthermore, $\mathrm{Var}(T/n) = \frac{1}{n^2} \mathrm{Var}(T)$: why $n^2$?
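I did verify this scaling numerically as well (again a sketch assuming NumPy, with arbitrary parameter choices), and the factor $\frac{1}{n^2}$ does hold, giving $\mathrm{Var}(T/n) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, trials = 10, 2.0, 200_000  # sample size, population sd, repetitions

samples = rng.normal(loc=5.0, scale=sigma, size=(trials, n))

# The sample mean T/n for each trial
means = samples.mean(axis=1)

print(means.var())  # empirically close to sigma**2 / n = 0.4
print(means.std())  # empirically close to sigma / sqrt(n), the standard error
```

So the simulation reproduces the formula, but I still don't see *why* dividing a random variable by $n$ divides its variance by $n^2$ rather than by $n$.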