Difference between optimisation on manifolds and Lagrange multipliers


I have a few references I'm currently reading through, but I still don't quite get the difference between optimising a function over a manifold and simply using constrained optimisation.

Do the algorithms end up being simpler in the manifold case?

Take the search direction (i.e. gradient descent) as an example. In the constrained case I would add Lagrange multipliers and check that the KKT conditions hold, while on a manifold I would just implement a retraction operator (I'm oversimplifying, I know...)
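To make the manifold side of that comparison concrete, here is a minimal sketch of retraction-based gradient descent on the unit sphere. The quadratic objective f(x) = xᵀAx and the renormalisation retraction are my own toy choices for illustration, not something from the question:

```python
import numpy as np

def sphere_gd(A, x0, step=0.05, iters=2000):
    """Toy Riemannian gradient descent on the unit sphere for f(x) = x^T A x.

    The retraction used here is simple renormalisation:
    R_x(v) = (x + v) / ||x + v||.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        egrad = 2 * A @ x                 # Euclidean gradient of f
        rgrad = egrad - (x @ egrad) * x   # project onto the tangent space at x
        x = x - step * rgrad              # move along the tangent direction
        x = x / np.linalg.norm(x)         # retract back onto the sphere
    return x

# Minimising x^T A x over the sphere recovers an eigenvector
# associated with the smallest eigenvalue of a symmetric A.
A = np.diag([3.0, 2.0, 1.0])
x = sphere_gd(A, np.array([1.0, 1.0, 1.0]))
```

Note there are no multipliers and no feasibility check anywhere: the iterate stays on the constraint set by construction, which is the point of the retraction.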

But apart from this difference I don't really see an advantage of one over the other.

Is there a computational advantage maybe? I know that a "large class" of constrained optimisation problems can be rephrased as optimisation over a manifold, but why would I want to do that? I'm pretty sure there's some subtlety I'm missing, but so far I wouldn't really bother learning about (Riemannian) manifolds just for a different framework.

There is 1 answer below.

Optimization on a manifold is a special class of constrained optimization, in which the constraint set has the structure of a smooth manifold. No matter how complex the objective function is, we can then treat the problem as an unconstrained one, with the Euclidean gradient replaced by the Riemannian (manifold) gradient.
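For contrast, the Lagrange-multiplier treatment of the same toy problem, min xᵀAx subject to ‖x‖² = 1, can be worked out in closed form. The following sketch (my own illustration, assuming NumPy) uses the fact that stationarity of the Lagrangian L(x, λ) = xᵀAx − λ(xᵀx − 1) gives Ax = λx, so the KKT points are exactly the unit eigenvectors of A, with the multiplier equal to the corresponding eigenvalue:

```python
import numpy as np

# Stationarity of L(x, l) = x^T A x - l (x^T x - 1) requires
# grad_x L = 2 A x - 2 l x = 0, i.e. A x = l x. So every KKT point
# is a unit eigenvector of A and the multiplier l is its eigenvalue.
A = np.diag([3.0, 2.0, 1.0])
eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order

x_star = eigvecs[:, 0]   # minimiser: eigenvector of the smallest eigenvalue
lam = eigvals[0]         # its Lagrange multiplier

# Verify the stationarity condition A x = l x numerically.
assert np.allclose(A @ x_star, lam * x_star)
```

Here the multiplier approach happens to reduce to an eigenproblem; for a less structured objective, finding and verifying such KKT points is exactly the part that becomes hard.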

However, when the objective function is very complex, it may not be easy to find points satisfying the KKT conditions, or to verify that a candidate point does.

In my opinion, one should choose between the two methods according to the concrete problem at hand.