Given the feasible set $M[g,h] = \{x \in \mathbb{R}^n : h_i(x) = 0,\ i \in I,\ g_j(x) \leq 0,\ j \in J\}$,
we set $J_0(x) = \{j \in J : g_j(x) = 0\}$.
Let $f, h_i, g_j \in C^1$, $i \in I$, $j \in J$, and assume the following conditions are fulfilled at a point $x \in M[g,h]$:
- The linear independence constraint qualification (LICQ) holds at $x$
- $|I| + |J_0(x)| = n$
- The KKT conditions are satisfied at $x$

Show that $x$ is a strict local minimizer.
My attempt:
I'm assuming that for a point to be a local minimizer we need $Df(x)=0$ and $D^2 f(x)$ positive definite. I'm assuming that due to LICQ and KKT we get $Df(x)=0$, but I can't prove that the second derivative is positive definite. Could anyone please help?
Your result is not true under the three conditions stated in your question.
Consider the function $f:\mathbb{R}^2\rightarrow\mathbb{R}$ defined by the expression $$f (x_1,x_2) = (x_1 - x_2^2)\left(x_1 - \dfrac{1}{2}x_2^2\right),$$ for every pair $(x_1,x_2)\in\mathbb{R}^2$.
First, the derivative of $f$ is zero at the origin, hence the origin is a KKT point for every possible feasible set. Second, considering the optimization problem $$\begin{array}{c c} \text{minimize}_{ (x_1,x_2)^{T} \in\mathbb{R}^2 } & f(x_1,x_2) \\ \text{subject to} & -x_1 \leq 0 \text{ and } -x_2 \leq 0, \end{array}$$ the point $(0,0)^{T}$ is not a local minimizer: $f(0,0)=0$, but $f(\tfrac{3}{4} t^2, t) = -\tfrac{t^4}{16} < 0$ for every $t>0$, and $(\tfrac{3}{4} t^2, t)^{T}$ is feasible. Moreover, your three conditions hold at the origin: LICQ is satisfied (the active-constraint gradients are $(-1,0)^{T}$ and $(0,-1)^{T}$), $|I| + |J_0| = 0 + 2 = n$, and KKT holds with both multipliers equal to zero. Hence, the origin is a KKT point satisfying your conditions that is not a local minimizer.
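A quick numerical sanity check of this counterexample (plain Python; the function name `f` is mine):

```python
def f(x1, x2):
    # The counterexample: f(x1, x2) = (x1 - x2^2) * (x1 - x2^2 / 2)
    return (x1 - x2**2) * (x1 - 0.5 * x2**2)

# The gradient vanishes at the origin (central finite differences),
# so (0, 0) is a KKT point with zero multipliers for these constraints.
eps = 1e-6
g1 = (f(eps, 0.0) - f(-eps, 0.0)) / (2 * eps)
g2 = (f(0.0, eps) - f(0.0, -eps)) / (2 * eps)
assert abs(g1) < 1e-9 and abs(g2) < 1e-9

# Along the feasible curve (3/4 t^2, t) with t > 0 we have
# f(3/4 t^2, t) = -t^4 / 16 < 0, so every neighborhood of the origin
# contains feasible points with a strictly smaller objective value.
for t in [1.0, 0.1, 0.01]:
    assert f(0.75 * t**2, t) < 0
    assert abs(f(0.75 * t**2, t) + t**4 / 16) < 1e-12
```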
To correct your assumptions, let us define the problem $$\begin{array}{c c} \text{minimize}_{ \boldsymbol{x} \in \mathbb{R}^n } & f(\boldsymbol{x}) \\ \text{subject to} & \boldsymbol{x} \in M[g,h]. \end{array}$$ Now, let us assume our first hypothesis, i.e., that the KKT conditions hold at $\boldsymbol{x}$: there exist multipliers $\lambda_i \in \mathbb{R}$, $i \in I$, and $\mu_j \geq 0$, $j \in J$, such that $$\nabla f(\boldsymbol{x}) + \sum_{j \in J_0} \mu_j \nabla g_{j}(\boldsymbol{x}) + \sum_{i \in I} \lambda_i \nabla h_{i}(\boldsymbol{x}) = \boldsymbol{0}, \qquad \mu_j g_j(\boldsymbol{x}) = 0,$$ for all $j \in J.$ Now, note, for minimality at $\boldsymbol{x}$, it is enough that $$f(\boldsymbol{y}) - f(\boldsymbol{x})\geq \delta \|\boldsymbol{y} - \boldsymbol{x}\|,$$ for all $\boldsymbol{y} \in U \cap M[g,h]$, where $U$ is a neighborhood of $\boldsymbol{x}$ and $\delta>0$. It is not hard to see that it suffices to prove that $$-\nabla f(\boldsymbol{x})^{T} \boldsymbol d <0, \tag{*}$$ for all $\boldsymbol d \in T(\boldsymbol{x})\setminus \{\boldsymbol{0}\}$, where $ T(\boldsymbol{x})$ denotes the tangent cone of $M[g,h]$ at $\boldsymbol{x}$. In order to use the KKT conditions to conclude this, let me define the index set of the positive multipliers, i.e., $$J^{+} = \{ j \in J : \mu_j>0\},$$ and consider two further hypotheses:

1. LICQ holds at $\boldsymbol{x}$, so that $T(\boldsymbol{x}) = \{\boldsymbol{d} \in \mathbb{R}^n : \nabla h_{i}(\boldsymbol{x})^{T}\boldsymbol{d} = 0,\ i \in I,\ \nabla g_{j}(\boldsymbol{x})^{T}\boldsymbol{d} \leq 0,\ j \in J_0\}$.
2. The only $\boldsymbol{d} \in T(\boldsymbol{x})$ with $\nabla g_{j}(\boldsymbol{x})^{T}\boldsymbol{d} = 0$ for all $j \in J^{+}$ is $\boldsymbol{d} = \boldsymbol{0}$.
Now, note that, for all $\boldsymbol{d} \in T(\boldsymbol{x})$, we have $$\begin{align}-\nabla f(\boldsymbol{x})^{T}\boldsymbol{d} =& \sum_{j\in J_{0}}\mu_j \nabla g_{j}(\boldsymbol{x})^{T}\boldsymbol{d} +\sum_{i \in I} \lambda_i \nabla h_{i}(\boldsymbol{x})^{T}\boldsymbol{d} \\ =& \sum_{j \in J_0}\mu_j \nabla g_{j}(\boldsymbol{x})^{T}\boldsymbol{d}\\ =& \sum_{j \in J^{+}} \mu_j \nabla g_{j}(\boldsymbol{x})^{T}\boldsymbol{d}\\ \leq & 0, \end{align}$$ using that $\nabla h_{i}(\boldsymbol{x})^{T}\boldsymbol{d} = 0$ for all $i \in I$, that $\mu_j = 0$ for $j \in J_0 \setminus J^{+}$, and that $\nabla g_j(\boldsymbol{x})^{T}\boldsymbol{d}\leq 0$ for all $j \in J_0$. Hence, since $\mu_j > 0$ for $j \in J^{+}$, the only way to have $-\nabla f(\boldsymbol{x})^{T}\boldsymbol{d}=0$ is $\nabla g_{j}(\boldsymbol{x})^{T}\boldsymbol{d}=0$ for all $j \in J^{+}$, which, by the hypothesis and $\boldsymbol{d} \in T(\boldsymbol{x})$, is only possible if $\boldsymbol{d}=\boldsymbol{0}$. Hence $(*)$ holds.
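As an illustration of this computation (the instance and names below are my own, not part of the argument): take $f(x) = x_1 + x_2$ with constraints $-x_1 \leq 0$, $-x_2 \leq 0$ at $\boldsymbol{x} = \boldsymbol{0}$, where KKT holds with $\mu = (1,1)$, so $J^{+} = J_0$:

```python
import numpy as np

grad_f = np.array([1.0, 1.0])          # gradient of f(x) = x1 + x2 at the origin
grad_g = np.array([[-1.0, 0.0],        # gradient of g1(x) = -x1
                   [0.0, -1.0]])       # gradient of g2(x) = -x2 (both active at 0)
mu = np.array([1.0, 1.0])              # positive multipliers, J+ = J0

# KKT stationarity: grad f + sum_j mu_j grad g_j = 0
assert np.allclose(grad_f + mu @ grad_g, 0.0)

# The tangent cone at the origin is {d : d >= 0}. Sample nonzero directions
# and check the identity -grad f(x)^T d = sum_{j in J+} mu_j grad g_j(x)^T d < 0.
rng = np.random.default_rng(0)
for _ in range(100):
    d = rng.uniform(0.1, 1.0, size=2)  # d in T(x), d != 0
    lhs = -grad_f @ d
    rhs = sum(mu[j] * grad_g[j] @ d for j in range(2))
    assert np.isclose(lhs, rhs) and lhs < 0
```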
Further, if you want something motivated by your three items, you can assume the stronger hypothesis of strict complementarity: $\mu_j > 0$ for all $j \in J_0(\boldsymbol{x})$, i.e., $J^{+} = J_0$. Indeed, combined with LICQ and $|I| + |J_0(\boldsymbol{x})| = n$, the gradients of the active constraints form a basis of $\mathbb{R}^n$, so $\nabla g_{j}(\boldsymbol{x})^{T}\boldsymbol{d} = 0$ for all $j \in J^{+}$ together with $\nabla h_{i}(\boldsymbol{x})^{T}\boldsymbol{d} = 0$ for all $i \in I$ forces $\boldsymbol{d} = \boldsymbol{0}$.
I was wondering if I could make a computable version of the Item 2 hypothesis.
Those who know theorems of the alternative might be guessing something. The problem with the theorem of the alternatives and similar results is that they do not cover the case of a nonzero variable that has no sign constraint attached to it. Item 2 is a mixture of linear algebra and a theorem of the alternative. In order to understand this, consider one of the best known ones, Farkas's Lemma:
Farkas's Lemma: Exactly one of the following systems has a solution:
(1) $\exists x \in \mathbb{R}^{n} : Ax = b, x \geq 0$
(2) $\exists p \in \mathbb{R}^{m} : p^TA \geq 0^T, p^Tb < 0.$
Note that it is quite easy to see that, if (2) has a solution, then (1) cannot have one. Indeed, let $p$ be such that $p^TA \geq 0^T$ and $p^Tb < 0$, and suppose, for contradiction, that (1) also has a solution. Then there is an $x$ such that $Ax = b$ and $x \geq 0$. Multiplying this equation by $p^T$, we get $0 \leq p^{T}Ax = p^{T}b < 0$, a contradiction. The lemma further says that, if the first system does not have a solution, then the second one does. Since that direction requires separation theorems, I will not prove it; just assume it is true.
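To make the easy direction concrete, here is a tiny instance (the numbers are my own): with $A = I$ and $b$ having a negative entry, system (1) is infeasible, and a certificate $p$ for system (2) proves it via the chain above:

```python
import numpy as np

A = np.eye(2)
b = np.array([-1.0, 2.0])   # negative first entry makes Ax = b, x >= 0 infeasible

# Farkas certificate for system (2): p^T A >= 0^T and p^T b < 0
p = np.array([1.0, 0.0])
assert np.all(p @ A >= 0)
assert p @ b < 0

# The chain from the proof: for any x >= 0 we have
# 0 <= (p^T A) x = p^T (A x), while p^T b < 0, so Ax = b is impossible.
x = np.array([0.5, 2.0])    # an arbitrary x >= 0
assert (p @ A) @ x >= 0 and (p @ A) @ x > p @ b
```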
Next, we will consider Tucker's theorem of the alternative, which can be derived from Farkas's Lemma:
Tucker's theorem of the alternative: Exactly one of the following systems has a solution:
(1) $ \boldsymbol{0} \leq A\boldsymbol{x} \not = \boldsymbol{0},\ B \boldsymbol{x}=\boldsymbol{0}$
(2) $ A^T \boldsymbol{\mu} + B^{T} \boldsymbol{\lambda} = \boldsymbol{0}$ with $\boldsymbol{\mu} > \boldsymbol{0}$ and $\boldsymbol{\lambda}$ free.
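A minimal illustration of the dichotomy (my own numbers, $n = 2$, one row each in $A$ and $B$): here system (1) has a solution, so (2) has none:

```python
import numpy as np

A = np.array([[1.0, 0.0]])
B = np.array([[0.0, 1.0]])

# System (1) has the solution x = (1, 0):
x = np.array([1.0, 0.0])
assert np.all(A @ x >= 0) and np.any(A @ x != 0) and np.allclose(B @ x, 0)

# Hence system (2) can have none: A^T mu + B^T lam = (mu, lam)^T = 0
# contradicts mu > 0. A coarse grid search confirms there is no solution.
found = any(
    np.allclose(A.T.ravel() * m + B.T.ravel() * l, 0.0)
    for m in np.linspace(0.01, 10, 100)
    for l in np.linspace(-10, 10, 201)
)
assert not found
```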
Using Tucker's theorem, it is not hard to see that the next result holds.
Result: Item 2 is equivalent to the Jacobian matrix of the active constraints having full rank $n$ at $\boldsymbol{x}$ together with the existence of vectors $\boldsymbol{\mu}>\boldsymbol{0}$ and $\boldsymbol{\lambda}$ free so that
$$-\sum_{j \in J_{0} \setminus J^{+}} \mu_j \nabla g_{j} (\boldsymbol{x}) + \sum_{j \in J^{+}} \lambda_j \nabla g_{j} (\boldsymbol{x}) + \sum_{i \in I} \lambda_i \nabla h_{i} (\boldsymbol{x})=\boldsymbol{0}.\tag{**}$$
Proof: I will only prove that Item 2 implies the full rank of the Jacobian matrix of the active constraints and the solvability of the system (**), because the other direction is quite similar. Because there is no solution for the system $$\nabla g_{j}(\boldsymbol{x})^{T}\boldsymbol{d} \leq 0,\ j \in J_0 \setminus J^{+}, \quad \nabla g_{j}(\boldsymbol{x})^{T}\boldsymbol{d} = 0,\ j \in J^{+}, \quad \nabla h_{i}(\boldsymbol{x})^{T}\boldsymbol{d} = 0,\ i \in I, \quad \boldsymbol{d} \neq \boldsymbol{0},$$
there is also no solution for the following one, obtained by turning the inequalities into equalities: $$\nabla g_{j}(\boldsymbol{x})^{T}\boldsymbol{d} = 0,\ j \in J_0, \quad \nabla h_{i}(\boldsymbol{x})^{T}\boldsymbol{d} = 0,\ i \in I, \quad \boldsymbol{d} \neq \boldsymbol{0}.$$
Hence, the rank of the Jacobian matrix of the active constraints must be $n$: its kernel is the trivial space, so, by the rank–nullity theorem, the rank is $n$.
Now, let $A$ be the matrix whose rows are $-\nabla g_{j}(\boldsymbol{x})^{T}$, $j \in J_0 \setminus J^{+}$, and $B$ the matrix whose rows are $\nabla g_{j}(\boldsymbol{x})^{T}$, $j \in J^{+}$, and $\nabla h_{i}(\boldsymbol{x})^{T}$, $i \in I$. Because the Jacobian has full rank, $A\boldsymbol{d} = \boldsymbol{0}$ and $B\boldsymbol{d} = \boldsymbol{0}$ force $\boldsymbol{d} = \boldsymbol{0}$, so the following system is equivalent to the original one and therefore also has no solution: $$\boldsymbol{0} \leq A\boldsymbol{d} \neq \boldsymbol{0}, \quad B\boldsymbol{d} = \boldsymbol{0}.$$
Thus, Tucker's theorem of the alternative guarantees that (**) has a solution.
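As a sanity check of the Result on a small instance (the constraints and numbers below are my own): at $\boldsymbol{x} = \boldsymbol{0}$ in $\mathbb{R}^2$, take $g_1(x) = -x_1$ with $\mu_1 > 0$ (so $1 \in J^{+}$), $g_2(x) = x_1 - x_2$ with $\mu_2 = 0$ (so $2 \in J_0 \setminus J^{+}$), and $h_1(x) = x_2$:

```python
import numpy as np

grad_g1 = np.array([-1.0, 0.0])   # gradient of g1(x) = -x1,      j in J+
grad_g2 = np.array([1.0, -1.0])   # gradient of g2(x) = x1 - x2,  j in J0 \ J+
grad_h1 = np.array([0.0, 1.0])    # gradient of h1(x) = x2

# Full-rank part of the Result: the Jacobian of the active constraints
J = np.vstack([grad_g1, grad_g2, grad_h1])
assert np.linalg.matrix_rank(J) == 2          # rank n = 2

# A solution of (**): -mu2*grad_g2 + lam1*grad_g1 + lamh*grad_h1 = 0
mu2, lam1, lamh = 1.0, -1.0, -1.0
residual = -mu2 * grad_g2 + lam1 * grad_g1 + lamh * grad_h1
assert np.allclose(residual, 0.0) and mu2 > 0

# Item 2 check by brute force over the unit circle: no direction d with
# grad_g1.d = 0, grad_h1.d = 0 and grad_g2.d <= 0 simultaneously.
theta = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
D = np.c_[np.cos(theta), np.sin(theta)]
ok = (np.abs(D @ grad_g1) < 1e-3) & (np.abs(D @ grad_h1) < 1e-3) & (D @ grad_g2 <= 0)
assert not ok.any()
```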