I am fairly new to machine learning, and I have a 22-dimensional dataset whose interpretability I would like to improve through dimensionality reduction. I am relatively familiar with principal component analysis (PCA), but I suspect a 22-dimensional space might be tricky to manage.
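For context, here is roughly what I have tried so far with PCA — a minimal sketch using scikit-learn on synthetic stand-in data (the low-rank structure is just an assumption for illustration, not my real dataset):

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for a 22-dimensional dataset: 500 samples driven by
# a hypothetical 3-dimensional latent structure plus a little noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 22))
X = latent @ mixing + 0.1 * rng.normal(size=(500, 22))

# Passing a float in (0, 1) to n_components tells scikit-learn to keep
# just enough components to explain that fraction of the total variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                         # far fewer than 22 columns
print(pca.explained_variance_ratio_.sum())     # at least 0.95 by construction
```

On my real data the variance is presumably spread across more directions than in this toy example, which is part of why I am looking for something beyond linear projections.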
I've been told to look into information geometry in order to find specific manifolds that would, for lack of a better expression, improve the categorization and dimensionality reduction of my data. I was recommended the book Methods of Information Geometry by Shun-ichi Amari and Hiroshi Nagaoka, but I was wondering if you have any further reading recommendations for tackling such a problem. Thanks!