I am dealing with an optimization problem where I have to maximize a sum of posynomials subject to affine constraints. The formulation of the problem (P) is as follows:
$$\text{maximize } f(\boldsymbol{x}) = \sum_{j=1}^m c_j \prod_{i=1}^{n} x_i^{a_{i,j}}\\ \text{subject to } h(\boldsymbol{x}) = \sum_{i=1}^n b_i x_i + C = 0 \text{ and } x_i \le 1, i \in \{1,\dots,n\}\\ \text{where } \boldsymbol{x} = (x_1, \dots, x_n) \in \mathbb{R}_{++}^n \text{ is the variable;}\\ \text{and } (c_1,\dots,c_m) \in \mathbb{R}_{++}^m\\ \text{and } (b_1,\dots,b_n) \in \mathbb{R}_{+}^n\\ \text{and } a_{i,j} \in \mathbb{N}, \forall (i,j) \in \{1,\dots,n\} \times \{1,\dots,m\}$$
Since the coefficients $c_j$ are strictly positive, this would conveniently be a geometric program (GP) if the goal were to *minimize* $f(\boldsymbol{x})$, since $f$ can be made convex after a logarithmic change of variables.
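For reference, the standard change of variables $y_i = \log x_i$ (so $x_i = e^{y_i}$) turns each posynomial term into the exponential of an affine function of $y$:
$$\log f(e^{y_1}, \dots, e^{y_n}) = \log \sum_{j=1}^m \exp\left( \log c_j + \sum_{i=1}^n a_{i,j}\, y_i \right),$$
a log-sum-exp of affine functions of $y$, hence convex, which is why the minimization version is tractable while the maximization version is not.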
Note that maximizing $f$ is equivalent to minimizing $$-f(\boldsymbol{x}) = \sum_{j=1}^m (-c_j) \prod_{i=1}^{n} x_i^{a_{i,j}},$$ where $-c_j < 0$; that is, $-f$ is a signomial, and (P) becomes a signomial program (SP), which is much harder to solve than a GP.
Even though no convexification is known for general (SP) problems, I was wondering whether, given the specific structure of (P), and in particular the fact that $f$ is exactly the negative of a posynomial, there exists a methodology that avoids solving a full (SP).
Note that if $m=1$, then $f$ reduces to a monomial, so (P) can be solved as a (GP) by minimizing $1/f$, which is also a monomial.