I need to find optimal Lagrangian multiplier vectors for a quadratic programming problem subject to three quadratic equality constraints and several linear inequality constraints, and I would like to use the KKT conditions for that. The objective function and all constraints are differentiable, and all except one are convex. The non-convex quadratic constraint has the form $$\mathbf x^T P \mathbf x + 2c^T\mathbf x + s = 0, \quad \mathbf x \geq 0,$$ where $P$ is a symmetric diagonal matrix with both positive and negative diagonal elements. From my analysis, since $P$ is diagonal its eigenvalues are exactly its diagonal entries. Because those entries have mixed signs, so do the eigenvalues, and hence $P$ is indefinite. This makes the constraint function, and consequently the problem, non-convex and unsuitable for finding the optimal Lagrangian multiplier vectors via the KKT approach.
Is there a transformation that makes this function convex, so that the KKT conditions guarantee optimal multiplier vectors? Or is there another technique for finding the optimal Lagrangian multiplier vector for such problems?
I assume you mean the constraint $x^T Px + 2c^Tx + s \leq 0$.
For intuition on the difficulty of this constraint, assume we also have constraints $x_i \in [0,1]$ for all $i \in\{1, \ldots, n\}$, and consider your single constraint in the special case $P=-I$, $c=(1/2, \ldots, 1/2)$, $s=0$: $$ -x^Tx + 1^Tx \leq 0 $$ This is equivalent to saying: $$ \sum_{i=1}^n (x_i-x_i^2) \leq 0 \: \: \: \: (1)$$ Since $x_i\in[0,1]$, we always have $x_i -x_i^2 \geq 0$, so every term of the sum in (1) is non-negative. If the constraint (1) holds, all terms must equal $0$. So $x_i-x_i^2=0$ for all $i$, which means $x_i \in \{0,1\}$. Thus your single constraint (together with the constraints $x_i\in[0,1]$) enforces binary constraints $x_i \in \{0,1\}$ on all of your variables. If this could be handled in a system where all other constraints are convex, it would be a general method for solving integer programs.
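The claim above is easy to check numerically. Below is a minimal sketch (the function name `constraint_value` is just illustrative) showing that the left-hand side of (1) vanishes at a binary point but is strictly positive at a fractional point in $[0,1]^n$:

```python
import numpy as np

def constraint_value(x):
    """Left-hand side of (1): sum of x_i - x_i^2."""
    return np.sum(x - x**2)

binary = np.array([1.0, 0.0, 1.0])       # a {0,1} vector: every term is 0
fractional = np.array([0.5, 0.2, 1.0])   # has fractional entries

print(constraint_value(binary))      # 0.0 -> constraint (1) is satisfied
print(constraint_value(fractional))  # ~0.41 > 0 -> constraint (1) is violated
```

So any point of $[0,1]^n$ with a fractional coordinate violates (1), consistent with the argument that (1) enforces binary variables.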
For these nonconvex problems, if $x^*$ is a local optimum satisfying a suitable regularity condition (a constraint qualification), then there exist KKT multipliers. For example, suppose your problem is: \begin{align} \mbox{Minimize:} \: \: & d^T x \\ \mbox{Subject to:} \: \: & Ax = b \\ & x^T P x + 2x^T c + s = 0 \end{align} where $d, b, c$ are given vectors of appropriate size, $s$ is a given scalar, and $A$ and $P$ are given matrices of appropriate size. Assume the matrix $P$ is symmetric and invertible. If $x^*$ is a local minimum of the above problem, then there are KKT multipliers $\lambda$ and $\mu$ (where $\lambda$ is a vector and $\mu$ a scalar) such that:
1) (Stationarity condition) We have $\nabla [d^T x + \lambda^T Ax + \mu(x^T P x + 2c^Tx)]|_{x=x^*} = 0$. Since $P$ is symmetric, this means:
$$ d + A^T\lambda + 2\mu (Px^*) + 2\mu c = 0$$ If the matrix $P$ is invertible and $\mu \neq 0$ then we get: $$ x^* = P^{-1}\left[\frac{-2\mu c -d - A^T\lambda}{2\mu}\right] $$
2) (Primal feasibility condition) The vector $x^*$ must satisfy the desired constraints.
First assume $\mu \neq 0$. We must then search over all vectors $(\lambda, \mu)$ with $\mu\neq 0$, compute $\hat{x}(\lambda, \mu) = P^{-1}\left[\frac{-2\mu c -d - A^T\lambda}{2\mu}\right]$, and determine whether the resulting vector $\hat{x}(\lambda, \mu)$ satisfies the desired constraints. Let $\mathcal{S}$ be the set of all $(\lambda, \mu)$ pairs that work, and let $\mathcal{X}$ be the set of all corresponding $\hat{x}$ vectors, so $\mathcal{X} = \{x : x = \hat{x}(\lambda,\mu) \: \mbox{for some $(\lambda,\mu) \in \mathcal{S}$}\}$. Then every local minimum $x^*$ (including the global minimum) of your problem is in the set $\mathcal{X}$. However, you are not guaranteed that every vector in $\mathcal{X}$ is a local minimum.
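A sketch of this candidate test (function names are illustrative, not from any library): given multipliers $(\lambda, \mu)$ with $\mu \neq 0$, recover $\hat{x}$ from stationarity and check primal feasibility of both constraints. In the demo, $b$ and $s$ are chosen so that the candidate is feasible by construction:

```python
import numpy as np

def x_hat(P, A, c, d, lam, mu):
    """Stationarity solution x_hat = P^{-1}[(-2*mu*c - d - A^T lam)/(2*mu)]."""
    return np.linalg.solve(P, (-2 * mu * c - d - A.T @ lam) / (2 * mu))

def is_primal_feasible(P, A, b, c, s, x, tol=1e-8):
    """Check Ax = b and x^T P x + 2 c^T x + s = 0 up to a tolerance."""
    return (np.allclose(A @ x, b, atol=tol)
            and abs(x @ P @ x + 2 * c @ x + s) < tol)

rng = np.random.default_rng(0)
n, m = 4, 2
P = np.diag([2.0, -1.0, 3.0, -2.0])   # symmetric, invertible, indefinite
A = rng.standard_normal((m, n))
c, d = rng.standard_normal(n), rng.standard_normal(n)
lam, mu = rng.standard_normal(m), 0.7

x = x_hat(P, A, c, d, lam, mu)
b = A @ x                             # choose data so this candidate is feasible
s = -(x @ P @ x + 2 * c @ x)
print(is_primal_feasible(P, A, b, c, s, x))   # True: this (lam, mu) is in S
```

In practice one would search over $(\lambda, \mu)$ with $b$ and $s$ fixed by the problem data, keeping only the pairs whose $\hat{x}$ passes `is_primal_feasible`.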
The case $\mu=0$ is usually easier since you can ignore the quadratic constraint, solve the corresponding linear program, and see if you can find a solution that happens to satisfy the quadratic constraint on its own.
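The $\mu = 0$ branch can be sketched with any LP solver; here is a minimal example using `scipy.optimize.linprog` on illustrative data (chosen so the LP solution happens to satisfy the quadratic constraint):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: minimize d^T x subject to Ax = b, x >= 0,
# ignoring the quadratic constraint x^T P x + 2 c^T x + s = 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
d = np.array([1.0, 2.0])
P = np.diag([1.0, -1.0])
c = np.array([0.0, 0.0])
s = -1.0

res = linprog(d, A_eq=A, b_eq=b, bounds=[(0, None)] * 2)  # LP only
x = res.x                                                 # optimum is (1, 0)
quad = x @ P @ x + 2 * c @ x + s                          # evaluate the ignored constraint
print(res.success, abs(quad) < 1e-6)                      # True True: it holds here
```

If the LP solution violates the quadratic constraint, the $\mu = 0$ case yields no candidate and only the $\mu \neq 0$ search remains.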
Change of variables: If you assume $\mu \neq 0$ and define $z=1/(2\mu)$ and $r = \lambda/(2\mu)$, the stationarity equation becomes: $$ x^* = P^{-1}\left[ -c - dz - A^T r \right]$$ and we get another quadratic type of problem of finding a vector $(z,r)$ that satisfies:
\begin{align} &A P^{-1} \left[ -c - dz - A^T r \right] = b \\ &[-c^T - d^Tz - r^TA]P^{-1}\left[ -c - dz - A^T r \right] + 2[-c^T-d^Tz-r^TA]P^{-1}c + s = 0 \end{align} One advantage is that the search over vectors $(z,r)$ might be in a smaller-dimensional space than the search over all vectors $x$ in the original problem.
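A quick numeric sanity check (on illustrative random data) that the substitution $z = 1/(2\mu)$, $r = \lambda/(2\mu)$ reproduces the stationarity solution from before:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2
P = np.diag([1.5, -2.0, 0.5, -1.0])   # symmetric, invertible, indefinite
A = rng.standard_normal((m, n))
c, d = rng.standard_normal(n), rng.standard_normal(n)
lam, mu = rng.standard_normal(m), 0.3

z, r = 1.0 / (2 * mu), lam / (2 * mu)
x_old = np.linalg.solve(P, (-2 * mu * c - d - A.T @ lam) / (2 * mu))  # original form
x_new = np.linalg.solve(P, -c - d * z - A.T @ r)                      # substituted form
print(np.allclose(x_old, x_new))   # True: the two parametrizations agree
```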