I'm currently working on exam review for an upcoming statistics exam and I've managed to dig myself too far into the theoretical background of basic statistical principles. I'm currently looking at the definitions of variance and standard deviation.
I understand that when given a sample, let's say $S=\{s_1, s_2, s_3, s_4, \dots, s_n\}$ from an observation, I can calculate the following $$d(S,\bar{S})=\sqrt{(s_1 - \bar{S})^2 + (s_2 - \bar{S})^2 +\ \dots\ + (s_n - \bar{S})^2} = \sqrt{\sum_{i=1}^n(s_i - \bar{S})^2}$$
which is the Euclidean distance or as I interpret it, the combined deviation of all my observations.
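To make the formula concrete, here is a minimal sketch that computes $d(S,\bar{S})$ directly from the definition above (the sample values are hypothetical, chosen only for illustration):

```python
import math

# A small illustrative sample (hypothetical values).
s = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
s_bar = sum(s) / len(s)  # sample mean, here 5.0

# Euclidean distance between the sample vector and the
# constant vector (s_bar, ..., s_bar), per the formula above:
d = math.sqrt(sum((x - s_bar) ** 2 for x in s))
print(d)  # → 5.656854... (sqrt of 32)
```

Note that this quantity grows with the number of observations even if the individual deviations stay the same size, which is exactly why variance and standard deviation divide by $n$ or $n-1$.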
My question then is: is the standard deviation the same as the average deviation? Also, could someone explain the reasoning for dividing by $n - 1$ instead of $n$ without involving the use of moments? And why do we divide inside the square root?
The standard deviation is the square root of the average squared distance from the mean. By that definition, the averaging (division by $n$ or $n-1$) happens before the square root is taken; that is why the division sits inside the root. Without the square root, the value is the variance.

As for $n-1$: the deviations are measured from the sample mean $\bar{S}$, which was itself computed from the same data. The sample mean is precisely the value that minimizes $\sum_i (s_i - \bar{S})^2$, so this sum is systematically smaller than the sum of squared deviations from the true population mean would be. Equivalently, the deviations are constrained to sum to zero, so only $n-1$ of them are free to vary. Dividing by $n-1$ instead of $n$ compensates for this underestimate and makes the sample variance an unbiased estimate of the population variance; the effect matters most for small samples.
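The underestimate from dividing by $n$ can be seen in a quick simulation. This is a sketch under assumed settings (standard normal population, so the true variance is 1; sample size 5; 20000 trials):

```python
import random

random.seed(0)

# Assumed setup: standard normal population, true variance = 1.
n = 5          # small sample size, where the bias is most visible
trials = 20000

sum_var_n, sum_var_n1 = 0.0, 0.0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    m = sum(sample) / n
    ss = sum((x - m) ** 2 for x in sample)
    sum_var_n += ss / n          # divide by n: biased low
    sum_var_n1 += ss / (n - 1)   # divide by n - 1: unbiased for variance

print(sum_var_n / trials)    # averages near (n-1)/n = 0.8, below the true 1
print(sum_var_n1 / trials)   # averages near the true variance, 1
```

The $n$-divisor estimate averages around $(n-1)/n$ times the true variance, while the $n-1$ divisor recovers it on average.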
Average deviation is the average of the absolute distances from the mean (the signed deviations always sum to zero, so the absolute value is needed). It is not the same as the standard deviation, which squares the distances before averaging and then takes a square root; squaring weights large deviations more heavily, so the two measures generally differ.
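A short sketch comparing the two measures on the same hypothetical sample shows they disagree:

```python
import math

s = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # hypothetical sample
s_bar = sum(s) / len(s)  # sample mean, 5.0

# Standard deviation: square root of the average squared distance
# (divisor n here, for the population-style definition).
sd = math.sqrt(sum((x - s_bar) ** 2 for x in s) / len(s))

# Average (mean absolute) deviation: average of |s_i - s_bar|.
avg_dev = sum(abs(x - s_bar) for x in s) / len(s)

print(sd, avg_dev)  # → 2.0 1.5
```

The standard deviation (2.0) exceeds the average deviation (1.5) because squaring gives the outlying values 2 and 9 extra weight.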