I'm implementing Newton's Method to solve a nonlinear program with general equality constraints, i.e.
\begin{equation} \max_{\mathbf{x}} \ f(\mathbf{x}) \end{equation}
\begin{equation*} \text{s.t.} \ \ \mathbf{h}(\mathbf{x}) = \mathbf{0} \end{equation*}
The Lagrangian is given by:
\begin{equation*} \mathcal{L}(\mathbf{x}, \boldsymbol{\lambda}) = f(\mathbf{x}) + \boldsymbol{\lambda}^T \mathbf{h}(\mathbf{x}) \end{equation*}
Its second-order Taylor expansion in $\mathbf{x}$, with $H$ the Hessian of $f$ and $J$ the Jacobian of $\mathbf{h}$, is:
\begin{equation*} \mathcal{L}(\mathbf{x} + \Delta \mathbf{x}, \boldsymbol{\lambda}) \approx f(\mathbf{x}) + \nabla f (\mathbf{x})^T \Delta \mathbf{x} + \frac{1}{2} \Delta \mathbf{x}^T H(\mathbf{x}) \Delta \mathbf{x} + \boldsymbol{\lambda}^T (\mathbf{h}(\mathbf{x}) + J(\mathbf{x}) \Delta \mathbf{x}) \end{equation*}
At each iteration, we solve the first-order conditions of this approximation, shown below:
\begin{equation} H(\mathbf{x}) \Delta \mathbf{x} + J(\mathbf{x})^T \boldsymbol{\lambda} = -\nabla f (\mathbf{x}) \end{equation}
\begin{equation*} J(\mathbf{x}) \Delta \mathbf{x} = -\mathbf{h}(\mathbf{x}) \end{equation*}
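For reference, here is a minimal sketch of how I assemble and solve this system at each iteration (function names are mine, assuming NumPy; the callables `grad_f`, `hess_f`, `h`, `jac_h` evaluate the derivatives above):

```python
import numpy as np

def newton_kkt_step(grad_f, hess_f, h, jac_h, x):
    """One Newton step on the first-order conditions of the Lagrangian.

    Solves the KKT system
        [ H   J^T ] [ dx  ]   [ -grad_f(x) ]
        [ J   0   ] [ lam ] = [ -h(x)      ]
    and returns the step dx and the multiplier estimate lam.
    """
    H = hess_f(x)                 # (n, n) Hessian of f
    J = jac_h(x)                  # (m, n) Jacobian of h
    g = grad_f(x)                 # (n,)   gradient of f
    hx = h(x)                     # (m,)   constraint values
    n, m = x.size, hx.size
    K = np.block([[H, J.T],
                  [J, np.zeros((m, m))]])
    rhs = np.concatenate([-g, -hx])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]
```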
Question: Once $\mathbf{x}$ reaches a feasible point, i.e. $\mathbf{h}(\mathbf{x})=\mathbf{0}$, the second first-order condition above reduces to $J(\mathbf{x}) \Delta \mathbf{x} = \mathbf{0}$; since $J$ has full rank, doesn't this force $\Delta \mathbf{x} = \mathbf{0}$, so that the algorithm fails to make any progress? How do you resolve this?
I'm specifically testing my algorithm on problem #6 in Test Examples for Nonlinear Programming Codes (Hock & Schittkowski):
\begin{equation} \max_{x_1, x_2} \ f(x_1, x_2) = -(1-x_1)^2 \end{equation}
\begin{equation*} \text{s.t.} \ \ \ 10(x_2 - x_1^2) = 0 \end{equation*}
with $\mathbf{x}_0 = (-1.2, 1)$ and known optimum $\mathbf{x}^* = (1, 1)$. My algorithm stalls at $\mathbf{x}_k = (-1, 1)$.
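For concreteness, here is how I'm encoding the problem data, written as a maximization (a sketch; the names are mine, assuming NumPy):

```python
import numpy as np

# Problem 6, posed as a maximization with one equality constraint
f      = lambda x: -(1 - x[0]) ** 2
grad_f = lambda x: np.array([2 * (1 - x[0]), 0.0])
hess_f = lambda x: np.array([[-2.0, 0.0],
                             [0.0,  0.0]])           # Hessian of f (constant)
h      = lambda x: np.array([10 * (x[1] - x[0] ** 2)])
jac_h  = lambda x: np.array([[-20 * x[0], 10.0]])    # 1x2 Jacobian of h

x0 = np.array([-1.2, 1.0])                           # starting point
```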
Thanks