Nassim Nicholas Taleb's article argues that standard deviation is flawed and poorly understood, and that mean absolute deviation should be used instead.
The article is short and I don't fully understand the difference, so could someone here please explain its thesis in more detail and more clearly?
Taleb's answer is something of a rant, but I can give one pro and one con of the mean absolute deviation (MAD), i.e. the average of the absolute deviations |x_i − x̄|.

Pro: MAD is an example of a robust statistic, and among the many properties robust statistics have is resistance to outliers. Because each deviation enters MAD linearly but enters SD squared, a few large values in the sample inflate SD much more than they inflate MAD.

Con: on a theoretical basis, the sampling distribution of MAD is often very hard to obtain in closed form, whereas there are often nice closed-form results for the distribution of SD.

Taleb argues that the availability of cheap computing power makes such theoretical analysis less necessary for applied work, and so robust statistics ought to be preferred. But being able to manipulate clean theoretical equations that approximate your problem well is also useful (Taleb would likely deny that SD has this property). So it's all debatable. I use them both!
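To see the outlier sensitivity concretely, here is a minimal sketch (my own toy data, not from Taleb's article) comparing SD and MAD on a sample before and after adding one large value. Since deviations enter SD squared but MAD only linearly, the outlier widens the gap between the two:

```python
import statistics

def mean_abs_dev(xs):
    # Mean absolute deviation: average distance from the sample mean.
    m = statistics.mean(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

clean = [9, 10, 10, 11, 10, 9, 11, 10]
with_outlier = clean + [50]  # one large outlier

for label, xs in [("clean", clean), ("with outlier", with_outlier)]:
    sd = statistics.pstdev(xs)  # population SD, for a like-for-like average
    mad = mean_abs_dev(xs)
    print(f"{label:>12}: SD = {sd:.2f}, MAD = {mad:.2f}, SD/MAD = {sd/mad:.2f}")
```

On the clean sample SD/MAD is about 1.41 (the Gaussian-like ratio); the single outlier pushes SD up proportionally more than MAD, so the ratio grows. With heavier-tailed data the divergence is more dramatic, which is part of Taleb's point.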