How to calculate the Riemannian gradient in practice (optimization)?


In a couple of papers, I saw steepest descent defined by $\mathbf{v} = -\mathbf{G}_p^{-1}\nabla f(\mathbf{p}) \in T_p\mathbb{R}^n$, where $\mathbf{G}_p$ is the Riemannian metric tensor of a manifold at $\mathbf{p}$. The next step was then to orthogonally project $\mathbf{v}$ onto the tangent space, giving $\mathbf{v}' \in T_p M$, which they defined as the Riemannian gradient (up to the sign for descent).

Could somebody explain the relationship between the gradient in $T_p\mathbb{R}^n = \mathbb{R}^n$ and the gradient in $T_p M$ (with respect to the examples below)? I don't have a background in differential geometry or physics; I am only optimizing some functions on "simple" manifolds via gradient descent.

To make this question a bit more concrete:

  • Hyperboloid model: the metric tensor is $\operatorname{diag}(-1, 1, \dots, 1)$, i.e. the identity matrix except that the first entry is $-1$. The tangent space $T_p M$ is the orthogonal complement of $\mathbf{p}$ (with respect to the associated Minkowski inner product), so the projection has to make $\mathbf{v}$ and $\mathbf{p}$ orthogonal in that sense.
  • Poincaré unit ball: $\mathbf{v} = -\left(\left(\frac{2}{1 - \lVert \mathbf{p}\rVert^2}\right)^2\mathbf{I}_n\right)^{-1} \nabla f(\mathbf{p})$. What is the tangent space of the Poincaré unit ball? Is an orthogonal projection also necessary?
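For concreteness, here is my attempt at both cases in code. This is only a sketch of my current understanding, not something from the papers: for the hyperboloid I apply the inverse ambient metric $\operatorname{diag}(-1,1,\dots,1)$ (which is its own inverse) and then project with the Minkowski inner product; for the Poincaré ball the metric is a scalar multiple of the identity, so its inverse is just the scalar factor $(1-\lVert\mathbf{p}\rVert^2)^2/4$, and I apply no projection since the open ball seems to have $T_p M = \mathbb{R}^n$.

```python
import numpy as np

def minkowski_inner(u, v):
    """Lorentzian inner product <u, v>_L = -u_0 v_0 + sum_{i>=1} u_i v_i."""
    return -u[0] * v[0] + u[1:] @ v[1:]

def hyperboloid_rgrad(egrad, x):
    """Riemannian gradient on the hyperboloid model, as I understand it.
    Step 1: apply the inverse ambient metric diag(-1, 1, ..., 1),
            i.e. flip the sign of the first component.
    Step 2: project onto T_x M = {v : <x, v>_L = 0} via
            v' = v + <x, v>_L x, which works because <x, x>_L = -1."""
    v = egrad.copy()
    v[0] = -v[0]
    return v + minkowski_inner(x, v) * x

def poincare_rgrad(egrad, p):
    """Riemannian gradient on the Poincare ball, as I understand it:
    the metric is conformal, G_p = (2 / (1 - ||p||^2))^2 I, so its
    inverse is the scalar (1 - ||p||^2)^2 / 4 and no projection
    seems necessary."""
    return ((1.0 - p @ p) ** 2 / 4.0) * egrad

# A point on the hyperboloid in R^3: x = (cosh t, sinh t, 0).
x = np.array([np.cosh(1.0), np.sinh(1.0), 0.0])
g = hyperboloid_rgrad(np.array([1.0, 2.0, 3.0]), x)
# g should satisfy <x, g>_L = 0, i.e. lie in the tangent space at x.
```

Does this match what the papers are doing, or am I missing a step?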