Difference between first-order and second-order Lagrange conditions in optimisation?


So, I'm studying maths in French, and my notes contain what is called a second-order Lagrange theorem with the following conditions (I couldn't find the equivalent on the internet, so I'm translating them and hopefully getting them right):

Let $x^* \in A := \{x \in \Omega : g_j(x) = 0,\ j = 1, \dots, m\}$ be a local solution, and suppose that:

1/ $f$ is twice differentiable at $x^*$,

2/ $g$ is of class $C^2$,

3/ the family $\nabla g_1(x^*), \dots, \nabla g_m(x^*)$ is linearly independent,

then there exists $\lambda := (\lambda_1, \dots, \lambda_m) \in \Bbb R^m$ such that:

$\nabla_x L(x^*,\lambda) = \nabla f(x^*) - \sum_{j=1}^m \lambda_j \nabla g_j(x^*) = 0$ and $\partial_{xx}^2 L(x^*,\lambda)(h,h) \ge 0$ for every $h \in \ker(g'(x^*))$.

What I don't get is :

Why do we need the second order? It doesn't feel much different from the first order, except for the nonnegative second partial derivative. What am I missing here?

(Also, what is this theorem called in English? I couldn't find it.)

Best answer:

Those are the Karush–Kuhn–Tucker (KKT) conditions, specialized here to equality constraints (in that case they reduce to the classical first- and second-order Lagrange conditions). The first-order condition is only necessary: it is also satisfied at saddle points and local maxima, so it alone cannot tell you that $x^*$ is a minimum.

The second-order condition is what rules out saddle points. As stated (with $\ge 0$) it is still a necessary condition; the sufficient version requires the strict inequality $\partial_{xx}^2 L(x^*,\lambda)(h,h) > 0$ for every nonzero $h \in \ker(g'(x^*))$, which guarantees that $x^*$ is a strict local minimum. The restriction $h \in \ker(g'(x^*))$ means you test the second-order directional derivative only in directions that stay, to first order, inside the constraint set.
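Here is a minimal numerical sketch of that idea, using a toy example of my own (not from the notes): $f(x,y) = x^2 - y^2$ has a saddle at the origin, so the first-order condition alone cannot decide anything, and the verdict depends entirely on which directions lie in $\ker(g'(x^*))$.

```python
import numpy as np

# Toy example (an assumption for illustration, not from the thread):
# f(x, y) = x^2 - y^2, candidate point x* = (0, 0), which is a saddle of f.
x_star = np.array([0.0, 0.0])

# First-order check with constraint g(x, y) = y = 0:
# grad f(0,0) = (0, 0) and grad g = (0, 1), so lambda = 0 works.
lam = 0.0
grad_f = np.array([2 * x_star[0], -2 * x_star[1]])
grad_g = np.array([0.0, 1.0])
grad_L = grad_f - lam * grad_g
print(np.allclose(grad_L, 0))      # True: first-order condition holds

# Hessian of L = Hessian of f here (the constraint is linear).
hess_L = np.array([[2.0, 0.0],
                   [0.0, -2.0]])

# ker(g'(x*)) for g(x, y) = y is spanned by h = (1, 0):
h = np.array([1.0, 0.0])
print(h @ hess_L @ h)              # 2.0 > 0: constrained local minimum

# Same f, but with constraint g(x, y) = x: kernel spanned by h2 = (0, 1).
h2 = np.array([0.0, 1.0])
print(h2 @ hess_L @ h2)            # -2.0 < 0: (0, 0) is NOT a constrained min
```

Note that the full Hessian of $L$ is indefinite in both cases; only its restriction to the kernel of the constraint Jacobian decides whether the point is a constrained minimum.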