Normalization and Neural Networks


I'm trying to implement a neural network classifier (using sklearn), and I've read on multiple sites that for this type of classifier (a multilayer perceptron) it is usually better to normalize the dataset before classification. I've tried the classification task both with and without normalizing the dataset, and the accuracy is always equal or higher with the non-normalized dataset.

I've also noticed that all the features of the dataset have very similar probability distributions, centered at 0 and with similar variances.

My questions are: is this similarity in the feature distributions the reason why normalization doesn't improve (or even worsens) my classifier? And should I normalize anyway, even if doing so reduces the accuracy of my classifier?
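For reference, the comparison I'm running looks roughly like this (the dataset here is a synthetic stand-in for mine, and the hyperparameters are illustrative):

```python
# Sketch of the experiment: the same MLP trained with and without
# feature standardization. Dataset and parameters are placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data standing in for my dataset (features already
# roughly centered at 0 with similar variances).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Without normalization: MLP on the raw features.
raw = MLPClassifier(max_iter=1000, random_state=0)
raw.fit(X_train, y_train)

# With normalization: StandardScaler rescales each feature to
# zero mean and unit variance before it reaches the MLP.
scaled = make_pipeline(StandardScaler(),
                       MLPClassifier(max_iter=1000, random_state=0))
scaled.fit(X_train, y_train)

print("raw accuracy:   ", raw.score(X_test, y_test))
print("scaled accuracy:", scaled.score(X_test, y_test))
```

In my case the first score is consistently equal to or higher than the second.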