Standard deviation formula: $$ \sigma=\sqrt{\frac{1}{N}\left[\left(x_{1}-\mu\right)^{2}+\left(x_{2}-\mu\right)^{2}+\cdots+\left(x_{N}-\mu\right)^{2}\right]}, \text { where } \mu=\frac{1}{N}\left(x_{1}+\cdots+x_{N}\right) $$ If I just want to measure dispersion from the mean, why can't I simply average the absolute differences of each term from the mean?
Something like this: $$ \frac{\left|x_{1}-\mu\right|+\left|x_{2}-\mu\right|+\cdots+\left|x_{N}-\mu\right|}{N} $$ Why is this not a good measure of dispersion from the mean?
Why is standard deviation better?
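For concreteness, here is a quick Python sketch computing both measures on a small hypothetical sample, showing that they generally give different values (the standard deviation weights large deviations more heavily):

```python
import numpy as np

# Hypothetical sample data, purely for illustration
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
mu = x.mean()  # mu = 5.0

# Standard deviation: square root of the mean squared deviation
sigma = np.sqrt(np.mean((x - mu) ** 2))

# Proposed alternative: mean of the absolute deviations
mad = np.mean(np.abs(x - mu))

print(sigma)  # 2.0
print(mad)    # 1.5
```

Both numbers clearly quantify spread around the mean, so the question is what makes $\sigma$ the preferred one.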