Background Information
I am studying the theorem on second-order conditions (S.O.C.s) for a constrained maximization problem. Here is the theorem from the book, which is stated without proof:
Theorem$\space\space\space\space$ Let $f$, $g_1$, $\dots$, $g_m$, $h_1$, $\dots$, $h_k$ be $C^2$ functions on $\mathbb{R}^n$. Consider the problem of maximizing $f$ on the constraint set \begin{equation} C_{g, h} \equiv \{\mathbf{x} \mid g_1(\mathbf{x}) \leq b_1, \dots, g_m(\mathbf{x}) \leq b_m, h_1(\mathbf{x}) = c_1, \dots, h_k(\mathbf{x}) = c_k\}. \end{equation} Form the Lagrangian \begin{equation} L(x_1, \dots, x_n, \lambda_1, \dots, \lambda_m, \mu_1, \dots, \mu_k) \\ = f(\mathbf{x}) - \lambda_1(g_1(\mathbf{x}) - b_1) - \dots - \lambda_m(g_m(\mathbf{x}) - b_m) - \mu_1(h_1(\mathbf{x}) - c_1) - \dots - \mu_k(h_k(\mathbf{x}) - c_k). \end{equation} (a) Suppose that there exist $\lambda_1^{*}, \dots, \lambda_m^{*}, \mu_1^{*}, \dots, \mu_k^{*}$ such that the following F.O.C.s hold at $(\mathbf{x}^*, \lambda^*, \mu^*)$: \begin{equation} \frac{\partial L}{\partial x_1} = 0, \dots, \frac{\partial L}{\partial x_n} = 0,\\ \lambda_1^{*} \geq 0, \dots, \lambda_m^{*} \geq 0,\\ \lambda_1^{*}(g_1(\mathbf{x}^*) - b_1) = 0, \dots, \lambda_m^{*}(g_m(\mathbf{x}^*) - b_m) = 0,\\ h_1(\mathbf{x}^*) = c_1, \dots, h_k(\mathbf{x}^*) = c_k. \end{equation} (b) For notation's sake, suppose that $g_1, \dots, g_e$ are binding at $\mathbf{x}^*$ and $g_{e + 1}, \dots, g_m$ are not binding. Write $(g_1, \dots, g_e)$ as $\mathbf{g}_E$. Suppose that the Hessian of $L$ with respect to $\mathbf{x}$ at $(\mathbf{x}^*, \lambda^*, \mu^*)$ is negative definite on the linear constraint set \begin{equation} \{\mathbf{v} \mid D\mathbf{g}_E(\mathbf{x}^*)\mathbf{v} = 0 \text{ and } D\mathbf{h}(\mathbf{x}^*)\mathbf{v} = 0\}; \end{equation} that is, \begin{equation} \mathbf{v} \neq 0,\quad D\mathbf{g}_E(\mathbf{x}^*)\mathbf{v} = 0,\quad D\mathbf{h}(\mathbf{x}^*)\mathbf{v} = 0 \quad \Longrightarrow \quad \mathbf{v}^T \cdot (D_{\mathbf{x}}^2L(\mathbf{x}^*, \lambda^*, \mu^*)) \cdot \mathbf{v} < 0. \end{equation} Then, $\mathbf{x}^*$ is a strict local constrained max of $f$ on $C_{g, h}$.
Problem
The question asks for a complete proof of the above theorem for the case of two variables and one inequality constraint.
My Attempt to Rewrite The Theorem
So here is my attempt. Instead of proving the special case of the theorem directly, I first rewrite the theorem for the case of two variables and one inequality constraint. But I ran into some trouble with the rewriting. Since there is only one inequality constraint, whether it is binding or not makes a difference, and I cannot find a way to prove it in a general manner, as in the statement of the theorem. I therefore rewrite the theorem as follows and want to prove it case by case:
Theorem$\space\space\space\space$ Let $f$, $g$ be $C^2$ functions on $\mathbb{R}^2$. Consider the problem of maximizing $f$ on the constraint set \begin{equation} C_{g} \equiv \{(x, y) \mid g(x, y) \leq b\}. \end{equation} Form the Lagrangian \begin{equation} L(x, y, \lambda) = f(x, y) - \lambda(g(x, y) - b). \end{equation} Case 1: $g$ is binding at the candidate point $(x^*, y^*)$.
(a) Suppose that there exists $\lambda^*$ such that the F.O.C.s are satisfied at $(x^*, y^*, \lambda^*)$; that is, \begin{equation} \frac{\partial L}{\partial x} = 0,\quad \frac{\partial L}{\partial y} = 0,\quad \frac{\partial L}{\partial \lambda} = 0,\quad \lambda^* \geq 0 \quad \text{at} \quad (x^*, y^*, \lambda^*);\ \text{and} \end{equation} (b) \begin{equation} \det \begin{pmatrix} 0 & \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y}\\ \frac{\partial g}{\partial x} & \frac{\partial^2 L}{\partial x^2} & \frac{\partial^2 L}{\partial x \partial y}\\ \frac{\partial g}{\partial y} & \frac{\partial^2 L}{\partial y \partial x} & \frac{\partial^2 L}{\partial y^2} \end{pmatrix} > 0 \quad \text{at} \quad (x^*, y^*, \lambda^*). \end{equation} Then, $(x^*, y^*)$ is a strict local constrained max of $f$ on $C_g$.
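To convince myself that Case 1 is at least plausible, I checked it numerically on a small instance of my own choosing (the functions $f(x, y) = xy$, $g(x, y) = x + y \leq 2$ and the candidate point $(1, 1)$ are only an illustration, not from the book):

```python
import numpy as np

# Illustrative instance (my own choice): maximize f(x, y) = x*y
# subject to g(x, y) = x + y <= 2. The constraint binds at the
# candidate (x*, y*) = (1, 1) with multiplier lambda* = 1.
x, y, lam = 1.0, 1.0, 1.0

# F.O.C.s: dL/dx = y - lam, dL/dy = x - lam, dL/dlam = -(x + y - 2)
foc = np.array([y - lam, x - lam, -(x + y - 2.0)])
assert np.allclose(foc, 0.0) and lam >= 0

# Bordered Hessian at (x*, y*, lambda*):
# g_x = 1, g_y = 1; L_xx = 0, L_xy = L_yx = 1, L_yy = 0
H = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
det_H = np.linalg.det(H)
print(det_H)  # positive (= 2), as condition (b) requires
```

So both (a) and (b) hold, and indeed $(1, 1)$ maximizes $xy$ along the boundary $x + y = 2$ locally.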
Case 2: $g$ is not binding at the candidate point $(x^*, y^*)$.
(a) Suppose that there exists $\lambda^* = 0$ such that the F.O.C.s are satisfied at $(x^*, y^*, \lambda^*)$; that is, \begin{equation} \frac{\partial L}{\partial x} = 0,\quad \frac{\partial L}{\partial y} = 0,\quad \lambda^* = 0 \quad \text{at} \quad (x^*, y^*, \lambda^*);\ \text{and} \end{equation} (b) \begin{equation} \det \begin{pmatrix} \frac{\partial^2 L}{\partial x^2} & \frac{\partial^2 L}{\partial x \partial y}\\ \frac{\partial^2 L}{\partial y \partial x} & \frac{\partial^2 L}{\partial y^2} \end{pmatrix} > 0 \quad \text{and} \quad \frac{\partial^2 L}{\partial x^2} < 0 \quad \text{at} \quad (x^*, y^*, \lambda^*). \end{equation} Then, $(x^*, y^*)$ is a strict local constrained max of $f$ on $C_g$.
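Again, a numerical sanity check of Case 2 on an instance of my own choosing (the functions and the point below are illustrative, not from the book):

```python
import numpy as np

# Illustrative instance (my own choice): maximize f(x, y) = 4 - x^2 - y^2
# subject to g(x, y) = x + y <= 2. The max is interior: (x*, y*) = (0, 0),
# g(0, 0) = 0 < 2, so the constraint is slack and lambda* = 0, hence L = f.
x, y, lam = 0.0, 0.0, 0.0

# F.O.C.s: dL/dx = -2x, dL/dy = -2y
assert np.allclose([-2.0 * x, -2.0 * y], 0.0) and lam == 0

# Hessian of L (= Hessian of f, since lambda* = 0)
H = np.array([[-2.0, 0.0],
              [0.0, -2.0]])
det_H = np.linalg.det(H)
print(det_H, H[0, 0])  # det positive (= 4) and L_xx = -2 < 0
```

The two minors alternate in sign as condition (b) demands, so the Hessian is negative definite and $(0, 0)$ is a strict local (in fact global) max of $f$ on $C_g$.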
I would like to know if my rewriting of the theorem for the special case of two variables and one inequality constraint is (1) correct and (2) necessary for proving the theorem. Could someone please help with it? Thanks a lot in advance!
My Attempt to Prove
Proof of Case 1$\space\space\space\space$ With a single binding inequality constraint, the theorem seems to reduce to the case of two variables and one equality constraint, except that we must additionally have $\lambda^* \geq 0$ to guarantee that the optimal point is indeed a maximum.
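Here is how I imagine the argument might go (my own sketch, not from the book, and it assumes additionally that $\lambda^* > 0$). Take any feasible sequence $\mathbf{x}_k \to \mathbf{x}^*$ with $\mathbf{x}_k \neq \mathbf{x}^*$, write $\mathbf{v}_k = (\mathbf{x}_k - \mathbf{x}^*)/\|\mathbf{x}_k - \mathbf{x}^*\|$, and pass to a convergent subsequence $\mathbf{v}_k \to \mathbf{v}$. Feasibility forces $Dg(\mathbf{x}^*)\mathbf{v} \leq 0$, leaving two cases. If $Dg(\mathbf{x}^*)\mathbf{v} < 0$, the F.O.C.s give $Df(\mathbf{x}^*) = \lambda^* Dg(\mathbf{x}^*)$, so \begin{equation} f(\mathbf{x}_k) - f(\mathbf{x}^*) = \lambda^* Dg(\mathbf{x}^*)(\mathbf{x}_k - \mathbf{x}^*) + o(\|\mathbf{x}_k - \mathbf{x}^*\|) < 0 \end{equation} for large $k$ (this is the step where I need $\lambda^* > 0$). If instead $Dg(\mathbf{x}^*)\mathbf{v} = 0$, then since $\lambda^*(g(\mathbf{x}_k) - b) \leq 0$ on $C_g$ and $D_{\mathbf{x}}L(\mathbf{x}^*, \lambda^*) = 0$, \begin{equation} f(\mathbf{x}_k) \leq L(\mathbf{x}_k, \lambda^*) = f(\mathbf{x}^*) + \tfrac{1}{2}(\mathbf{x}_k - \mathbf{x}^*)^T D_{\mathbf{x}}^2 L\, (\mathbf{x}_k - \mathbf{x}^*) + o(\|\mathbf{x}_k - \mathbf{x}^*\|^2), \end{equation} and the bordered Hessian condition (b) makes the quadratic form negative on $\{\mathbf{v} \mid Dg(\mathbf{x}^*)\mathbf{v} = 0\}$, so again $f(\mathbf{x}_k) < f(\mathbf{x}^*)$ for large $k$. I am not sure this sketch is complete, which is part of what I am asking about.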
Proof of Case 2$\space\space\space\space$ The proof of case 2 follows from the following theorem:
Theorem$\space\space\space\space$ Let $F: U \to \mathbb{R}$ be a $C^2$ function whose domain is an open set $U$ in $\mathbb{R}^n$. Suppose that $\mathbf{x}^*$ is a critical point of $F$, in that it satisfies \begin{equation} \frac{\partial F}{\partial x_i}(\mathbf{x}^*) = 0 \quad \text{for} \quad i = 1, \dots, n. \end{equation} If the Hessian $D^2F(\mathbf{x}^*)$ is a negative definite symmetric matrix, then $\mathbf{x}^*$ is a strict local max of $F$.
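To make sure I am applying negative definiteness correctly, a quick numerical check on an example of my own (the function below is illustrative): negative definiteness can be read off either from the eigenvalues of the symmetric Hessian or from the alternating leading principal minors, which is exactly the form used in my Case 2 condition (b).

```python
import numpy as np

# Illustrative instance (my own choice): F(x, y) = -2x^2 - y^2 + x*y.
# DF = (-4x + y, x - 2y) vanishes only at the critical point (0, 0).
H = np.array([[-4.0, 1.0],
              [1.0, -2.0]])  # Hessian D^2 F (constant here)

# Negative definite <=> all eigenvalues < 0
# <=> leading principal minors alternate: H[0,0] < 0 and det H > 0.
eigs = np.linalg.eigvalsh(H)
det_H = np.linalg.det(H)
print(eigs)            # both eigenvalues negative
print(H[0, 0], det_H)  # -4 < 0 and det = 7 > 0
```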
I would like to know if my thoughts are correct. I really appreciate any help!