I am working on a research project with an optimization problem of the form
$$\begin{array}{ll} \text{minimize} & c^T x\\ \text{subject to} & A x + b = 0\\ & x_i \le x_j x_k\\ & x \in \mathbb R_+^n\end{array}$$
where $x_i$, $x_j$ and $x_k$ are different elements of the vector $x$. My questions are the following:
Can I reformulate this problem in some way to make it convex?
If not, what would be the most efficient way (algorithm or method) to find a global optimum?
I am mostly curious about this because when I solve the problem with powerful NLP solvers (e.g. CONOPT, MINOS) I always get the same general result, independent of initial guesses and problem parameters. My guess is therefore that there are not many "bumps" in the way for GRG-based solvers, but I haven't been able to express that in any meaningful (mathematical) form.
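To make the "same result from any starting point" observation concrete, here is a minimal sketch on a made-up 3-variable instance (the data $c$, $A$, $b$ are my own illustrative choices, not from the actual problem), using SciPy's SLSQP as a stand-in for CONOPT/MINOS:

```python
# Sketch only: a toy instance  min -x0 + x1 + x2  s.t.  x0 + x1 + x2 = 3,
# x0 <= x1*x2, x >= 0, solved from several starting points with a local
# NLP method. All data below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

c = np.array([-1.0, 1.0, 1.0])          # objective: minimize c^T x

def objective(x):
    return c @ x

constraints = [
    # Ax + b = 0 with A = [1, 1, 1], b = -3, i.e. x0 + x1 + x2 = 3
    {"type": "eq",   "fun": lambda x: x[0] + x[1] + x[2] - 3.0},
    # bilinear constraint x_i <= x_j x_k (here i=0, j=1, k=2);
    # SciPy expects g(x) >= 0, so write it as x1*x2 - x0 >= 0
    {"type": "ineq", "fun": lambda x: x[1] * x[2] - x[0]},
]
bounds = [(0.0, None)] * 3               # x in R^3_+

solutions = []
for start in ([0.5, 1.0, 1.5], [2.0, 0.5, 0.5], [0.1, 2.0, 0.9]):
    res = minimize(objective, start, bounds=bounds,
                   constraints=constraints, method="SLSQP")
    solutions.append(res)

# For this instance every start converges to x = (1, 1, 1) with objective 1,
# mirroring the behaviour described above.
for res in solutions:
    print(res.x, res.fun)
```

On this toy instance one can check by hand that the global optimum is $x = (1, 1, 1)$ (along the equality constraint the objective is $3 - 2x_0$, and maximizing $x_0$ subject to $x_0 \le x_1 x_2$ forces $x_1 = x_2 = 1$), so the local solver happens to find the global solution from every start; of course, that is an observation about this instance, not a proof for the general problem.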
Now I have established that the problem is not convex in this form. I have also tried, unsuccessfully, to show that it is quasiconvex, which would have been the case if the inequality constraint were of the form $\alpha \le x_j x_k$, where $\alpha$ is some scalar (see the similar problem in Boyd, Example 3.31). On this website I found some clever tricks that transform similar constraints by taking the square root of both sides (the geometric mean is concave on $\mathbb R^n_+$). This transforms the inequality constraint of my problem into $g(x)=\sqrt{x_i} - \sqrt{x_j x_k} \le 0$. Now $g(x)$ is the sum of a concave term ($\sqrt{x_i}$) and a convex term ($-\sqrt{x_j x_k}$), so its sign structure is indeterminate, and that is as close as I have been able to get.
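The nonconvexity of the constraint set itself is easy to certify with a two-point counterexample (the specific points below are my own illustrative choices):

```python
# Two points satisfying x0 <= x1*x2 (with x >= 0) whose midpoint violates it,
# certifying that the constraint set {x >= 0 : x_i <= x_j x_k} is nonconvex.
# Indices i=0, j=1, k=2 are illustrative.
def feasible(x):
    return all(v >= 0 for v in x) and x[0] <= x[1] * x[2]

p = (4.0, 2.0, 2.0)   # 4 <= 2*2  -> feasible
q = (0.0, 0.0, 0.0)   # 0 <= 0*0  -> feasible
m = tuple(0.5 * (a + b) for a, b in zip(p, q))  # midpoint (2, 1, 1)

print(feasible(p), feasible(q), feasible(m))  # True True False, since 2 > 1*1
```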
The bottom line is that I haven't been able to transform this into a convex or quasiconvex problem, so as a last resort I am posting it here in the hope of a satisfying answer...
If you only have a single constraint of the form $x_i \leq x_j x_k$, you could use some of the transformations already mentioned (either by you or in the comments) and reformulate the problem as a convex program with a single reverse convex constraint. A tailored global optimization algorithm for this class of problems has been investigated here.
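One way to make that structure explicit (a sketch of one possible lifting, not necessarily the one used in the cited work) is via the identity $4 x_j x_k = (x_j + x_k)^2 - (x_j - x_k)^2$, which turns $x_i \le x_j x_k$ into $4x_i + (x_j - x_k)^2 \le (x_j + x_k)^2$. Introducing an auxiliary variable $t$ gives
$$\begin{array}{ll} \text{minimize} & c^T x\\ \text{subject to} & A x + b = 0\\ & 4x_i + (x_j - x_k)^2 \le t\\ & (x_j + x_k)^2 - t \ge 0\\ & x \in \mathbb R_+^n,\; t \in \mathbb R.\end{array}$$
The third constraint is convex, while the fourth is the single reverse convex constraint ($g(x,t) = (x_j+x_k)^2 - t$ is convex, and the constraint requires $g \ge 0$). Eliminating $t$ recovers $4x_i + (x_j - x_k)^2 \le (x_j + x_k)^2$, i.e. the original $x_i \le x_j x_k$, so the lifting is exact.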
Edit: In theory, the above method works even for multiple reverse convex constraints. It also occurred to me that you could use so-called reduced-space branch-and-bound methods for global optimization that exploit the structure of your problem (since the number of variables participating in a nonconvex fashion in your formulation is relatively small and/or most of your constraints are simple equality constraints); see the works of Epperly and Pistikopoulos, Dür and Horst, and Stuber et al.
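To give a flavour of what the lower-bounding step of such a branch-and-bound scheme looks like here: the only nonconvexity is a single bilinear term, so on a given box one can replace $w = x_j x_k$ by its McCormick envelope and solve an LP. A minimal sketch on a made-up 3-variable instance (the objective, equality constraint, and variable bounds are all my own illustrative assumptions):

```python
# Sketch: root-node McCormick relaxation of x0 <= x1*x2 on a toy instance,
#   minimize -x0 + x1 + x2  s.t.  x0 + x1 + x2 = 3,  x0 <= x1*x2,  x in [0,3]^3.
# Introduce w ~ x1*x2 and relax it with the McCormick envelope on [0,3]x[0,3];
# the LP optimum is then a valid lower bound that branching would tighten.
import numpy as np
from scipy.optimize import linprog

# Decision variables: (x0, x1, x2, w)
c = [-1.0, 1.0, 1.0, 0.0]

A_ub = [
    [1.0, 0.0, 0.0, -1.0],   # x0 - w <= 0            (x0 <= w)
    [0.0, 3.0, 3.0, -1.0],   # 3*x1 + 3*x2 - w <= 9   (w >= 3*x1 + 3*x2 - 9)
    [0.0, 0.0, -3.0, 1.0],   # w - 3*x2 <= 0          (w <= 3*x2)
    [0.0, -3.0, 0.0, 1.0],   # w - 3*x1 <= 0          (w <= 3*x1)
]
b_ub = [0.0, 9.0, 0.0, 0.0]
A_eq = [[1.0, 1.0, 1.0, 0.0]]   # x0 + x1 + x2 = 3
b_eq = [3.0]
bounds = [(0.0, 3.0)] * 3 + [(0.0, 9.0)]  # w >= 0 is the remaining envelope cut

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.fun)   # -0.6 here, a valid lower bound on the true optimum (1)
```

On this instance the true optimum is $1$ (at $x = (1,1,1)$), so the relaxation gap at the root node is $1.6$; a branch-and-bound method would subdivide the box on $x_1$ or $x_2$ and re-solve tighter envelopes until the bound closes.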