Nonlinear conjugate gradient descent for maximization


I have a multivariable function for which I have defined the cost function and its gradient with respect to the variable vector. Call the cost function $f(\overrightarrow{x})$ and the variable vector $\overrightarrow{x}$. I have been using the nonlinear conjugate gradient descent method for minimization. The algorithm is as follows: $$k=0,\quad x_0=\vec{0},\quad g_0=\nabla_{\overrightarrow{x}} f(x_0),\quad \Delta x_0=-g_0$$ $$x_{k+1}\leftarrow x_k + t\,\Delta x_k, \qquad g_{k+1}\leftarrow\nabla f(x_{k+1}), \qquad \Delta x_{k+1}\leftarrow-g_{k+1}+\gamma\,\Delta x_k$$
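For reference, the loop above can be sketched in Python. The $\gamma$ update rule and the step size $t$ are not specified in the question, so this sketch assumes the Fletcher–Reeves formula for $\gamma$ and a fixed step $t$ (in practice $t$ would come from a line search):

```python
import numpy as np

def nonlinear_cg_min(grad, x0, t=0.1, max_iter=200, tol=1e-8):
    """Minimize f by nonlinear conjugate gradient.

    grad : callable returning the gradient of f at x
    t    : fixed step size (an assumption; usually a line search)
    The Fletcher-Reeves formula is assumed for gamma.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                 # Delta x_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:        # stop when gradient vanishes
            break
        x = x + t * d                      # x_{k+1} = x_k + t * Delta x_k
        g_new = grad(x)                    # g_{k+1}
        gamma = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves gamma
        d = -g_new + gamma * d             # Delta x_{k+1}
        g = g_new
    return x

# Example: minimize f(x) = ||x - (1,1)||^2, whose gradient is 2(x - (1,1)).
x_star = nonlinear_cg_min(lambda x: 2.0 * (x - np.array([1.0, 1.0])),
                          x0=[0.0, 0.0])
```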

I know that I need to update the formula so the solution "ascends" instead of descends. How can I update the algorithm? I also wonder how the Wolfe conditions change for maximization problems. Thank you and have a nice day.


Convert your maximization problem into a minimization problem by setting $f_{new}(x)=-f(x)$, and run the default conjugate gradient method for minimization.
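A minimal sketch of this negation trick (the concave toy function, its maximizer, and the use of SciPy's CG minimizer are all illustrative choices, not part of the question):

```python
import numpy as np
from scipy.optimize import minimize

# Toy concave function to maximize: f(x) = -(x1 - 2)^2 - (x2 + 1)^2,
# which attains its maximum at (2, -1).
def f(x):
    return -(x[0] - 2.0) ** 2 - (x[1] + 1.0) ** 2

def grad_f(x):
    return np.array([-2.0 * (x[0] - 2.0), -2.0 * (x[1] + 1.0)])

# Negate both the function and its gradient, then minimize as usual
# with a conjugate gradient method.
res = minimize(lambda x: -f(x), x0=[0.0, 0.0],
               jac=lambda x: -grad_f(x), method="CG")
x_max = res.x  # approximate maximizer of the original f
```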

Since you specifically asked how the formulas and Wolfe conditions change: plug in $-f(x)$ for the function values and $-\nabla f(x)$ for the gradients and work it out for yourself.
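For concreteness, here is what that substitution gives for the standard (weak) Wolfe conditions. For minimization with descent direction $\Delta x_k$ and $g_k=\nabla f(x_k)$, the conditions on the step $t$ are $$f(x_k + t\Delta x_k) \le f(x_k) + c_1 t\, g_k^T \Delta x_k, \qquad \nabla f(x_k + t\Delta x_k)^T \Delta x_k \ge c_2\, g_k^T \Delta x_k,$$ with $0 < c_1 < c_2 < 1$. Replacing $f$ by $-f$ and $\nabla f$ by $-\nabla f$ (so $\Delta x_k$ is now an ascent direction, $g_k^T\Delta x_k > 0$) flips both inequalities: $$f(x_k + t\Delta x_k) \ge f(x_k) + c_1 t\, g_k^T \Delta x_k, \qquad \nabla f(x_k + t\Delta x_k)^T \Delta x_k \le c_2\, g_k^T \Delta x_k.$$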