I have been looking at the Uniform Manifold Approximation and Projection (UMAP) method for dimensionality reduction, or manifold learning, in machine learning. The idea is to learn some lower-dimensional manifold embedded within a higher-dimensional ambient space. The method seems to proceed by the usual approach of building up a global structure from local neighborhoods.
The one confusing thing is that the paper uses a bit of Riemannian geometry to suggest that a Riemannian metric can vary locally on the manifold. Here is an excerpt from the software package's documentation.
So it looks like the authors are creating simplices from balls of varying radius about each point: at some points the ball has a larger radius than at others. It seems that the basis of the metric is the first nearest neighbor, so the ball around each point grows until it reaches that nearest neighbor.
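To make my reading concrete, here is a small sketch of that idea as I understand it. This is my own toy code, not the actual UMAP implementation: each point gets a ball radius `rho` equal to the distance to its nearest neighbor, and distances are then rescaled by that local radius.

```python
import math

# Toy 1-D point cloud with uneven density: the points on the left are
# packed tightly, the points on the right are spread out.
points = [0.0, 0.1, 0.2, 3.0, 5.0]

def dist(a, b):
    return abs(a - b)

# rho_i: distance from point i to its first nearest neighbor.  This is
# the ball radius described in the documentation -- the ball about each
# point grows until it touches that first neighbor.
rho = [
    min(dist(p, q) for j, q in enumerate(points) if j != i)
    for i, p in enumerate(points)
]

# A locally rescaled "distance" from i to j: the same Euclidean distance
# looks long from a point in a dense region (small rho) and short from a
# point in a sparse region (large rho).  Note that it is not symmetric.
def local_dist(i, j):
    return dist(points[i], points[j]) / rho[i]

print(rho)                              # small radii on the left, large on the right
print(local_dist(0, 3), local_dist(3, 0))
```

If that picture is right, then the "metric" each point sees is already different from its neighbors' metrics, which seems to be what the paper means by the metric varying locally.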
I was hoping someone could explain what it means for a Riemannian metric to "vary" over the manifold. I am not clear on what that means or how it works. So a metric is defined on the manifold, but that distance metric can change based upon where on the manifold it is applied?
Is there any intuition for, or bound on, this kind of idea? I imagine that if the metric varies too much, then without some limits the manifold could just stretch to cover $\mathbb{R}^n$.
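For concreteness, the picture I have in mind for a metric that changes with position is the hyperbolic upper half-plane, where $ds^2 = (dx^2 + dy^2)/y^2$. This example is my own, not from the paper: the same Euclidean segment has a different length depending on where it sits, because the "ruler" shrinks as $y$ grows.

```python
import math

def curve_length(path, n=10_000):
    """Numerically integrate the hyperbolic length of a parametrized
    path(t) = (x, y) for t in [0, 1], using ds = sqrt(dx^2 + dy^2) / y
    evaluated at the midpoint height of each small step."""
    total = 0.0
    prev = path(0.0)
    for k in range(1, n + 1):
        cur = path(k / n)
        dx, dy = cur[0] - prev[0], cur[1] - prev[1]
        y_mid = 0.5 * (cur[1] + prev[1])
        total += math.hypot(dx, dy) / y_mid
        prev = cur
    return total

# The same unit-length Euclidean segment, placed at two different heights:
low  = curve_length(lambda t: (t, 1.0))   # hyperbolic length 1.0
high = curve_length(lambda t: (t, 10.0))  # hyperbolic length 0.1

print(low, high)
```

So here the metric varies smoothly from point to point, yet distances stay perfectly well defined; I assume UMAP's locally varying metric is meant in this spirit, just estimated from data rather than given by a formula.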
UPDATE:
Here is some additional text from the paper.

