I have started reading this paper and do not understand a line in the first paragraph of the introduction. The author says:
Indeed, as far as the mean square error is concerned, Gaussian distributions represent already the worst case, so that in the framework of a minimax mean least square analysis, no need is felt to improve estimators for non-Gaussian sample distributions.
As far as I know, in the non-parametric setting the sample mean is a minimax estimator of the expected value of a random variable under squared loss, over the class of distributions with finite variance (see Bickel and Doksum, 2015, Example 3.3.4).
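To make my understanding of that claim concrete, here is a minimal simulation sketch (my own, not from the paper): the MSE of the sample mean is \(\sigma^2/n\) for any distribution with variance \(\sigma^2\), so a non-Gaussian distribution with the same variance gives the same risk, which is what I take "worst case" to hinge on. The specific distributions and sample sizes below are just illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, sigma2 = 50, 200_000, 1.0

# Gaussian samples with mean 0 and variance sigma2
g = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
# Centered exponential samples: Exp(1) minus its mean also has variance 1
e = rng.exponential(1.0, size=(reps, n)) - 1.0

# MSE of the sample mean as an estimator of the true mean (0 in both cases)
mse_gauss = np.mean(g.mean(axis=1) ** 2)
mse_exp = np.mean(e.mean(axis=1) ** 2)

print(mse_gauss, mse_exp, sigma2 / n)  # all close to 0.02
```

Both empirical MSEs match \(\sigma^2/n\), so under squared loss the risk of the sample mean depends on the distribution only through its variance.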
But I do not understand what the author means by Gaussian distributions "already representing the worst case".