Related to sequential quadratic programming (SQP), and optimization in general


I have a maximization problem whose objective function is linear, and almost all of the constraints define convex sets. However, there are two constraints I am not sure about. They have the form $$g_1(x)=g_{1,1}(x)\,g_{1,2}(x)\leq c,$$ where $c$ is some positive constant and $x\geq 0$. I know that $g_{1,1}(x)$ is convex, but I am not sure about $g_{1,2}(x)$ (I think we cannot tell for sure whether $g_{1,2}(x)$ is convex or concave). In this case, is it valid to replace $g_{1,1}(x)g_{1,2}(x)$ in the constraint by its first-order Taylor expansion around the current iterate?

I know that this kind of operation is done in sequential quadratic programming (SQP). But in SQP, as I understand it, *all* the constraints are linearized and the objective is replaced by a quadratic model. In my case I want to approximate only the constraints that do not define convex sets, and keep the rest exactly as they are. So is this approach of linearizing only a few constraints right or wrong? I will be very thankful for your comments. Thanks in advance.
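To make the idea concrete, here is a small toy sketch of what I mean. The functions, bounds, damping factor, and the crude vertex-enumeration LP solver below are all my own hypothetical choices for illustration, not part of my actual problem: only the product constraint is replaced by its first-order Taylor expansion at the current iterate, while the linear objective and box bounds stay exact.

```python
# Hypothetical toy problem (my own example, not my real functions):
#   maximize  x0 + x1
#   s.t.      g(x) = g_{1,1}(x) * g_{1,2}(x) = (x0^2 + 1) * (x1 + 1) <= 4,
#             0 <= x0, x1 <= 2.
# (x0^2 + 1) is convex, but the product need not be, so only this
# constraint is linearized; everything else is kept as is.

def g(x0, x1):
    return (x0 ** 2 + 1.0) * (x1 + 1.0)

def grad_g(x0, x1):
    return (2.0 * x0 * (x1 + 1.0), x0 ** 2 + 1.0)

def solve_linearized_lp(xk, c=4.0, lo=0.0, hi=2.0):
    """Maximize x0 + x1 over the box [lo, hi]^2 intersected with the
    half-plane  g(xk) + grad_g(xk) . (x - xk) <= c,  by enumerating the
    vertices of that 2-D polygon (adequate for this toy problem only)."""
    d0, d1 = grad_g(*xk)
    b = c - g(*xk) + d0 * xk[0] + d1 * xk[1]    # i.e. d0*x0 + d1*x1 <= b
    cands = [(u, v) for u in (lo, hi) for v in (lo, hi)]
    for u in (lo, hi):                           # line meets edges x0 = u
        if d1 != 0.0:
            cands.append((u, (b - d0 * u) / d1))
    for v in (lo, hi):                           # line meets edges x1 = v
        if d0 != 0.0:
            cands.append(((b - d1 * v) / d0, v))
    feas = [(u, v) for u, v in cands
            if lo - 1e-9 <= u <= hi + 1e-9
            and lo - 1e-9 <= v <= hi + 1e-9
            and d0 * u + d1 * v <= b + 1e-9]
    return max(feas, key=lambda p: p[0] + p[1])

x = (0.0, 0.0)                                   # feasible starting point
for _ in range(60):
    lp = solve_linearized_lp(x)
    # damped step toward the subproblem solution; in practice a trust
    # region guards against the linearization being a poor model far
    # from the expansion point
    x = (x[0] + 0.3 * (lp[0] - x[0]), x[1] + 0.3 * (lp[1] - x[1]))

print(x, g(*x))    # the iterates approach the boundary g(x) = 4
```

On this toy instance the damped iteration settles on the nonconvex boundary (near $x_0 = 1/\sqrt{3}$, $x_1 = 2$, where $g(x) = 4$), but I understand that without a trust region or line search such a scheme has no general convergence guarantee, which is part of what I am asking about.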