Minimizing the sum of linear ratios


Is there a way to transform the following optimization problem into an equivalent convex problem:

$$ \min_x \, \left\{ \sum_{i=1}^n \frac{\langle x,a_i \rangle}{\langle x,b_i\rangle} \right\} $$

with $x \in \mathcal{X}$, where $\mathcal{X} \subset \mathbb{R}^n$ is a convex subset of $\mathbb{R}^n$ and $\langle\cdot,\cdot\rangle$ denotes the standard scalar product on $\mathbb{R}^n$. Moreover, $a_i, b_i \in \mathbb{R}^n$ and $b_i > 0$ for all $i$ (i.e., all components of the vectors $b_i$ are positive; if necessary, the problem could be reformulated so that the vectors $a_i$ also have only positive components).

The convex set $\mathcal{X}$ is defined via the inequalities

$c_1 \leq x \leq c_2, \,\,\,\,\,\, c_1,c_2 >0$

$\sum_{i=1}^n \langle x, A_i\, x \rangle \,\, \leq c_3$

with positive vectors $c_1$ and $c_2$, a positive constant $c_3$, and symmetric positive semi-definite matrices $A_i$. In addition, there are linear equality constraints of the form

$f(x) =c_4$ with a linear function $f$.

I want to transform this problem to an equivalent convex optimization problem.

I have constructed several ways to transform this problem into a convex one for $n=1$; in that case I can also prove several nice properties. For $n>1$, however, I was not able to find such a transformation. Does anybody know whether this is possible, and if so, how it can be done?
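For concreteness, the setup can be sketched numerically. The instance below is entirely made up ($a_i$, $b_i$, $A_i$, the bounds, and the constants $c_3, c_4$ are hypothetical), and a local solver such as SLSQP carries no global-optimality guarantee for this nonconvex objective:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint, LinearConstraint

# Hypothetical small instance (n = 3); all data are made up for illustration.
rng = np.random.default_rng(0)
n = 3
a = rng.uniform(0.1, 1.0, size=(n, n))   # rows are the vectors a_i
b = rng.uniform(0.1, 1.0, size=(n, n))   # rows are the vectors b_i > 0
M = rng.standard_normal((n, n, n))
A = np.einsum('ijk,ilk->ijl', M, M)      # A_i = M_i M_i^T, symmetric PSD

def objective(x):
    # sum_i <x, a_i> / <x, b_i>
    return np.sum((a @ x) / (b @ x))

# sum_i <x, A_i x> <= c_3 (here c_3 = 100, chosen generously)
quad = NonlinearConstraint(lambda x: np.einsum('j,ijk,k->', x, A, x),
                           -np.inf, 100.0)
# a linear equality f(x) = c_4, here simply sum(x) = 3
lin = LinearConstraint(np.ones((1, n)), 3.0, 3.0)

res = minimize(objective, np.ones(n), bounds=[(0.5, 2.0)] * n,
               constraints=[quad, lin], method='SLSQP')
print(res.x, res.fun)
```

This only finds a local minimum from the given starting point; it does not answer the convexification question, but it makes the problem data concrete.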

Thank you very much!



General problems of the type you give are NP-hard, although structured classes can be solved efficiently. Here is an argument that shows hardness in general:

Consider the (hard) integer program of finding a vector $x \in \mathbb{R}^n$ to satisfy the following constraints:
\begin{align} &Ax \leq b \\ &\sum_{i=1}^n x_i = k\\ &x_i \in \{0, 1\} \quad \forall i \in \{1, ..., n\} \end{align}

Assume the integer program is feasible (so there exists an integer solution that satisfies the constraints). Let's define a new (non-integer) problem of the general type you pose, then show it could be used to solve the above (hard) problem. The new problem is:

\begin{align} \mbox{Minimize:} \quad & \sum_{i=1}^n \frac{x_i}{1+x_i} \\ \mbox{Subject to:} \quad & A x\leq b\\ &\sum_{i=1}^n x_i = k \\ &x_i \in [0,1] \quad \forall i \in \{1, ..., n\} \end{align}

Claim: If the integer program is feasible, then the new problem is also feasible, and any optimal solution of the new problem satisfies the constraints of the integer program.

Proof: Let $y^*$ be a solution to the integer problem. Then $y_i^* \in \{0,1\}$ and $\sum_{i=1}^n y_i^*=k$. Notice that the function $x/(1+x)$ satisfies $$ \text{(Fact 1):} \quad \frac{x}{1+x} \geq \frac{x}{2} \quad \forall x \in [0,1], \mbox{ with equality iff } x\in \{0,1\}, $$ since $\frac{x}{1+x} - \frac{x}{2} = \frac{x(1-x)}{2(1+x)}$, which is nonnegative on $[0,1]$ and zero exactly at $x \in \{0,1\}$. Hence $$ \sum_{i=1}^n \frac{y_i^*}{1+y_i^*} = \sum_{i=1}^n \frac{y_i^*}{2} = k/2 $$ Notice that $y^*$ satisfies the constraints of the new problem and achieves an objective value of $k/2$. So the new problem is feasible, and its optimal objective value is less than or equal to $k/2$.

Now let $x^*$ be an optimal solution to the new problem. It suffices to show $x^*$ is a binary vector. Since the optimal objective value is no more than $k/2$, we have: $$ \sum_{i=1}^n \frac{x_i^*}{1+x_i^*} \leq k/2 $$ On the other hand: $$ k/2 \geq \sum_{i=1}^n \frac{x_i^*}{1+x_i^*} \overset{(a)}{\geq} \sum_{i=1}^n \frac{x_i^*}{2} \overset{(b)}{=} k/2 $$ where (a) holds by Fact 1 and (b) holds because $x^*$ satisfies the constraint $\sum_{i=1}^n x_i^*=k$. It follows that $$ \sum_{i=1}^n\frac{x_i^*}{1+x_i^*} = \sum_{i=1}^n \frac{x_i^*}{2}$$ But for each $i \in \{1, ..., n\}$, we know by Fact 1 that $$\frac{x_i^*}{1+x_i^*} \geq \frac{x_i^*}{2}$$
So equality must hold in each term $i$, that is, $\frac{x_i^*}{2} = \frac{x_i^*}{1+x_i^*}$, which implies $x_i^* \in \{0,1\}$ by Fact 1. $\Box$
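As a numerical sanity check of this argument, here is a brute-force sketch on a tiny made-up instance ($n=3$, $k=2$, and no extra $Ax \leq b$ constraints); it verifies Fact 1 on a grid and confirms that the minimizer of the relaxed problem is binary:

```python
import numpy as np

# Fact 1: x/(1+x) >= x/2 on [0,1], with equality only at x = 0 and x = 1.
t = np.linspace(0.0, 1.0, 1001)
assert np.all(t / (1.0 + t) >= t / 2.0 - 1e-12)
inner = (t > 0) & (t < 1)
assert np.all(t[inner] / (1.0 + t[inner]) > t[inner] / 2.0)

# Brute-force the relaxed problem over a grid of the feasible slice
# {x in [0,1]^3 : x_1 + x_2 + x_3 = 2}; the minimizer should be binary.
k = 2
grid = np.linspace(0.0, 1.0, 101)
best_val, best_x = np.inf, None
for x1 in grid:
    for x2 in grid:
        x3 = k - x1 - x2
        if 0.0 <= x3 <= 1.0:
            x = np.array([x1, x2, x3])
            val = np.sum(x / (1.0 + x))
            if val < best_val:
                best_val, best_x = val, x
print(best_x, best_val)   # a 0/1 vector with objective k/2 = 1.0
```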


The "structured classes" of problems I mentioned above are generalizations of linear-fractional problems in which the denominator terms can be grouped over separate variables, such as \begin{align} \mbox{Minimize:} \quad & \sum_{i=1}^G \frac{g_i(x)}{T_i(x)} \\ \mbox{Subject to:} \quad & \sum_{i=1}^G \frac{h_{i,k}(x)}{T_i(x)} \leq c_k, \quad \forall k \in \{1, ..., K\} \end{align} where $G$ is the number of groups and, for each group $i \in \{1, ..., G\}$, the functions $g_i(x)$, $h_{i,k}(x)$, $T_i(x)$ depend on a disjoint set of components of the $x$ vector. For example, group 1 uses only components $x_1, x_2$, group 2 uses components $x_3, x_4$, and so on. I've done some work in more complicated stochastic scenarios that use this structure; see, for example, Theorem 1 here (which is unfortunately written for a more complicated system; a simplified version could be written for this problem):

http://ee.usc.edu/stochastic-nets/docs/asynch-markov-v8.pdf
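For the single-ratio building block of these structured classes (one group, linear-fractional objective, polyhedral constraints), the classical Charnes-Cooper substitution $y = tx$, $t = 1/\langle x, b \rangle$ reduces the problem to a linear program. A sketch with made-up data (the vectors $a$, $b$ and the box bounds are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

# Minimize <x,a>/<x,b> over {x : c1 <= x <= c2} with b > 0, via the
# Charnes-Cooper substitution y = t*x, t = 1/<x,b>, which yields the LP
#   minimize <y,a>  s.t.  <y,b> = 1,  c1*t <= y <= c2*t,  t >= 0.
# The data below are made up for illustration.
a = np.array([2.0, 1.0])
b = np.array([1.0, 3.0])
c1, c2 = 0.5, 2.0

n = len(a)
cost = np.concatenate([a, [0.0]])                       # variables (y, t)
A_ub = np.block([[np.eye(n), -c2 * np.ones((n, 1))],    #  y - c2*t <= 0
                 [-np.eye(n), c1 * np.ones((n, 1))]])   # -y + c1*t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.concatenate([b, [0.0]]).reshape(1, -1)        # <y,b> = 1
b_eq = np.array([1.0])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * n + [(0.0, None)])
x_opt = res.x[:n] / res.x[n]                            # recover x = y / t
print(x_opt, res.fun)                                   # optimum (0.5, 2.0), ratio 6/13
```

Note that $t > 0$ holds automatically at any feasible point, since $t = 0$ would force $y = 0$ and violate $\langle y, b \rangle = 1$. This substitution handles one ratio exactly; it is the sum over several ratios sharing variables that destroys convexity, as the hardness argument above shows.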