I know that it is possible to use conjugate gradient to solve minimization problems, but I can't quite wrap my head around how it works.
I have a vector $x$ that I wish to project onto a constraint $c(x)=0$.
I have previously solved this problem iteratively using gradient descent where I update $x$ as follows:
$x := x - (\nabla_x c) c$
It should be noted that $c$ is not a quadratic function (though I guess I could turn it into one by squaring it if need be); right now it is just a simple scalar function whose value can be both positive and negative.
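For concreteness, the update above can be sketched as follows (a minimal example of my own; the unit-circle constraint and the names `c` and `grad_c` are purely illustrative):

```python
import numpy as np

def project_gd(x, c, grad_c, step=1.0, tol=1e-10, max_iter=100):
    """Iteratively project x onto c(x) = 0 by repeating x := x - step * (grad c) * c."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        cx = c(x)
        if abs(cx) < tol:  # close enough to the constraint surface
            break
        x -= step * grad_c(x) * cx  # the update from the question
    return x

# Example: project [2, 0] onto the unit circle, c(x) = ||x||^2 - 1
c = lambda x: x @ x - 1.0
grad_c = lambda x: 2.0 * x
x_proj = project_gd([2.0, 0.0], c, grad_c, step=0.1)
```

The step size matters here: too large and the iteration overshoots and diverges, too small and it crawls.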
Supposedly I should be able to do a similar iterative projection with conjugate gradient, but I don't quite see how. Looking at the algorithms for CG, it seems I need a matrix $A$, which I'm assuming somehow comes from my constraint $c$, but I can't quite make sense of it.
I finally figured out what I was missing: I needed to look at nonlinear conjugate gradient algorithms rather than the standard linear ones: https://en.wikipedia.org/wiki/Nonlinear_conjugate_gradient_method
This method relies on $f$ behaving approximately quadratically near the minimum, so I will likely need to minimize the least-squares objective $f = c^\top c$ (just $c^2$ for a scalar constraint) for all of this to work.
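As a rough sketch (not a polished implementation), Fletcher–Reeves nonlinear CG applied to $f(x) = c(x)^2$ with a simple backtracking line search might look like this; the function names are my own, and the unit-circle constraint is just for illustration:

```python
import numpy as np

def project_ncg(x, c, grad_c, tol=1e-10, max_iter=200):
    """Drive c(x) toward 0 via nonlinear CG (Fletcher-Reeves) on f(x) = c(x)**2."""
    x = np.asarray(x, dtype=float).copy()
    f = lambda y: c(y) ** 2
    grad_f = lambda y: 2.0 * c(y) * grad_c(y)  # chain rule
    g = grad_f(x)
    d = -g  # first search direction is steepest descent
    for _ in range(max_iter):
        if f(x) < tol:
            break
        slope = g @ d
        if slope >= 0:  # safeguard: restart if d is not a descent direction
            d, slope = -g, -(g @ g)
        # backtracking (Armijo) line search along d
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * slope and t > 1e-16:
            t *= 0.5
        x = x + t * d
        g_new = grad_f(x)
        beta = (g_new @ g_new) / max(g @ g, 1e-300)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

# Example: project [2, 0] onto the unit circle, c(x) = ||x||^2 - 1
c = lambda x: x @ x - 1.0
grad_c = lambda x: 2.0 * x
x_proj = project_ncg([2.0, 0.0], c, grad_c)
```

The only CG-specific parts are the $\beta$ coefficient and the direction update $d \leftarrow -\nabla f + \beta d$; other choices of $\beta$ (Polak–Ribière, Hestenes–Stiefel) drop in at the same spot.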