I'm working on a machine learning problem where I have a set of $N$ real values, from which I need to compute the sample mean and the sample variance.
For specific reasons, I need to assume that these values are normally distributed around the sample mean, with variance equal to the sample variance.
The histograms of two such sets of values ($N=64$), along with the PDF of the corresponding normal distribution (parameterized by the sample mean and variance), are shown below.
I know that the assumption about the normality of the data may be too strong, or even totally unjustifiable. What do you think?
Moreover, is there any method for getting rid of the outliers -- so that a better Gaussian can be fitted, maybe? I took a look at some ANOVA approaches, but I am not sure whether they are applicable, or even appropriate, in my case.
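To make the outlier question concrete, here is a minimal sketch of one simple approach I have in mind: iterative sigma-clipping, where points far from the current sample mean are discarded and the Gaussian is refitted. The threshold (`n_sigma=3`) and iteration cap are my own arbitrary choices, not anything established for my problem:

```python
import random
import statistics

def sigma_clip(values, n_sigma=3.0, max_iter=5):
    """Iteratively discard points more than n_sigma sample standard
    deviations from the sample mean, then refit mean and stdev."""
    data = list(values)
    for _ in range(max_iter):
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)
        kept = [x for x in data if abs(x - mu) <= n_sigma * sigma]
        if len(kept) == len(data):  # converged: nothing clipped
            break
        data = kept
    return statistics.mean(data), statistics.stdev(data), data

# Synthetic example: 64 roughly normal values plus two gross outliers.
random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(64)] + [8.0, -9.0]
mu, sigma, kept = sigma_clip(sample)
print(mu, sigma, len(kept))
```

I realize this re-estimates the mean and variance from the clipped data, so it is not a principled robust estimator (median/MAD-based scale estimates might be better); I mention it only to clarify what I mean by "fitting a better Gaussian".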
Thank you.

