For the S.O.C. of a constrained optimization problem, why do we need a bordered Hessian?


If we have a constrained optimization problem $$\max_x f(x_1,\dots,x_n) \quad \text{s.t.} \quad g(x_1,\dots,x_n)=0,$$ then we can use Lagrange's theorem, treating its equations as first-order conditions, to find the critical points of the Lagrangian.

But I am now reading that the second-order condition for a maximum is: $$v^T D^2L(x^*, \lambda^*)v<0 \quad \forall\, v\neq 0 \in \mathbb{R}^n \text{ such that } Dg(x^*)v=0.$$
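To make the condition concrete, here is a minimal numerical sketch on a hypothetical example of my own (not from the notes): maximize $f(x,y)=xy$ subject to $g(x,y)=x+y-2=0$, whose first-order conditions give the critical point $x^*=(1,1)$ with $\lambda^*=1$. The condition is then checked directly on a tangent vector $v$ with $Dg(x^*)v=0$:

```python
import numpy as np

# Hypothetical example: max f(x, y) = x*y  s.t.  g(x, y) = x + y - 2 = 0.
# The Lagrangian L = x*y - lam*(x + y - 2) has critical point
# x* = (1, 1), lam* = 1 from the first-order conditions.
D2L = np.array([[0.0, 1.0],
                [1.0, 0.0]])   # D^2 L with respect to (x, y) at (x*, lam*)
Dg  = np.array([1.0, 1.0])     # Dg at x*

# Any v with Dg v = 0 is a multiple of (1, -1); evaluate the quadratic form there.
v = np.array([1.0, -1.0])
assert np.isclose(Dg @ v, 0.0)
print(v @ D2L @ v)   # -2.0 < 0, so the S.O.C. for a maximum holds at x*
```

Note that $D^2L$ itself is indefinite here (its eigenvalues are $\pm 1$); the quadratic form is only negative on the tangent space of the constraint, which is exactly what the condition asks for.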

My lecture notes then go on to say that to find points satisfying this condition, we need to construct the "bordered Hessian" and check the signs of its last $n-m$ leading principal minors (where $m$ is the number of constraints).
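As a sketch of what that check looks like in practice, consider the hypothetical problem of maximizing $f(x,y)=xy$ subject to $x+y=2$ (so $n=2$, $m=1$, critical point $(1,1)$ with $\lambda^*=1$ — my own example, not from the notes). The bordered Hessian puts a zero block for the constraints in the corner, bordered by $Dg$, and the last $n-m$ leading principal minors are the determinants of the upper-left $k\times k$ blocks for $k=2m+1,\dots,m+n$:

```python
import numpy as np

# Hypothetical example: max x*y s.t. x + y = 2, critical point (1, 1), lam* = 1.
D2L = np.array([[0.0, 1.0],
                [1.0, 0.0]])   # D^2 L at the critical point
Dg  = np.array([[1.0, 1.0]])   # m = 1 constraint, n = 2 variables
m, n = Dg.shape

# Bordered Hessian: zero m-by-m block, bordered by Dg.
H = np.block([[np.zeros((m, m)), Dg],
              [Dg.T,             D2L]])

# Last n - m leading principal minors: upper-left k-by-k determinants,
# k = 2m + 1, ..., m + n.
minors = [np.linalg.det(H[:k, :k]) for k in range(2 * m + 1, m + n + 1)]
print(minors)   # one minor here: det(H) = 2.0
```

In the standard sign test (as I understand it), a local maximum requires these minors to alternate in sign, with the largest one having the sign of $(-1)^n$; here $(-1)^2>0$ and $\det H=2>0$, consistent with a maximum.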

I have two questions:

  1. Is there an intuitive explanation for why this condition on the bordered Hessian implies that the S.O.C. for a maximum is met? How does it relate, for example, to definiteness of the bordered Hessian?

  2. Are there other ways to check whether these conditions hold? For example, can we also check the eigenvalues of the bordered Hessian?
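Regarding question 2, one alternative I have seen (stated here as an assumption, not from my notes) is to avoid the bordered Hessian entirely: compute a basis $Z$ for the nullspace of $Dg(x^*)$ and check the eigenvalues of the projected Hessian $Z^T D^2L\, Z$. A sketch on the same hypothetical example ($\max xy$ s.t. $x+y=2$ at $(1,1)$):

```python
import numpy as np

# Hypothetical example: max x*y s.t. x + y = 2, critical point (1, 1), lam* = 1.
D2L = np.array([[0.0, 1.0],
                [1.0, 0.0]])
Dg  = np.array([[1.0, 1.0]])

# Basis for the tangent space null(Dg), via the SVD of Dg.
_, s, vt = np.linalg.svd(Dg)
rank = (s > 1e-12).sum()
Z = vt[rank:].T                 # columns of Z span {v : Dg v = 0}

# Hessian of L restricted to the tangent space.
proj = Z.T @ D2L @ Z
eigs = np.linalg.eigvalsh(proj)
print(eigs)                     # all negative here -> S.O.C. for a max holds
```

The eigenvalues of the bordered Hessian itself are not useful for this: its zero diagonal block forces it to be indefinite at every point, which is why the test is phrased in terms of signed minors rather than definiteness.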