An Interesting Resource Allocation Problem


Here is the problem: $$\begin{array}{ll} \text{minimize} & \sum_{i=1}^N \frac{1}{1 + \exp(C_i + x_i)}\\ \text{subject to} & \sum_{i=1}^N x_i \le R \\ & x_i \ge 0, ~ i = 1,2,\dots,N \end{array}$$

$C_1,C_2,\dots,C_N$ are real numbers and can be positive, zero, or negative. The problem seems easy, but after several days I still have no idea how to solve it. Thanks for any comments.

Best answer:

I found this problem interesting to consider! What application does it come from? It is not convex unless $\min_i C_i\geq 0$; and since you've explicitly said that $C_i$ can be negative, it cannot be treated as convex. Furthermore, the problem is infeasible if $R<0$ and trivial if $R=0$, so let us assume that $R>0$.

Because the constraints are linear and the objective is smooth, the KKT conditions are necessary conditions that any optimum must satisfy. These conditions are built around the Lagrangian \begin{aligned} L(x,\lambda,\nu) &= \sum_i \frac{1}{1+\exp(C_i+x_i)} - \sum_i \lambda_i x_i - \nu \left( R - \sum_i x_i \right) \\ &= -R\nu + \sum_i \left[ \frac{1}{1+\exp(C_i+x_i)} + (\nu-\lambda_i) x_i \right] \end{aligned} where $\lambda_i\geq 0$, $i=1,2,\dots,N$ are the Lagrange multipliers for the constraints $x_i\geq 0$, and $\nu \geq 0$ is the Lagrange multiplier for $\sum_i x_i \leq R$. The conditions are:

  • Stationarity: $$\frac{\exp(C_i+x_i)}{(1+\exp(C_i+x_i))^2}=\nu-\lambda_i, \quad i=1,2,\dots, N.$$
  • Primal and dual feasibility: $$\sum_i x_i \leq R, \quad \nu\geq 0, \quad x_i,\lambda_i\geq 0, ~ i=1,2,\dots, N$$
  • Complementary slackness: $$\nu \left( R- \sum_i x_i \right) = 0 \quad\Longrightarrow\quad \nu =0 ~~\text{or}~~\sum_i x_i=R$$ $$\lambda_i x_i = 0 \quad\Longrightarrow\quad x_i=0~~\text{or}~~\lambda_i=0, \quad i=1,2,\dots, N.$$
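The left-hand side of the stationarity condition is just the negative of the derivative of each objective term, which a quick finite-difference check confirms (the values of $C_i$ and $x_i$ below are arbitrary illustrations, and the function names are mine):

```python
import math

def term(c, x):
    """One term of the objective: 1 / (1 + exp(c + x))."""
    return 1.0 / (1.0 + math.exp(c + x))

def lhs(c, x):
    """Left-hand side of the stationarity condition:
    exp(c + x) / (1 + exp(c + x))^2."""
    e = math.exp(c + x)
    return e / (1.0 + e) ** 2

# Central finite difference: d/dx term(c, x) should equal -lhs(c, x),
# so stationarity dL/dx_i = 0 sets lhs equal to nu - lambda_i.
for c, x in [(-1.0, 0.5), (0.0, 2.0), (2.0, 0.0)]:
    h = 1e-6
    fd = (term(c, x + h) - term(c, x - h)) / (2 * h)
    assert abs(fd + lhs(c, x)) < 1e-8
```

Note in passing that this left-hand side is strictly positive, so stationarity already forces $\nu = \lambda_i + \text{(positive)} > 0$.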

The complementary slackness condition on $\nu$ is not that helpful, because at the optimum it must be true that $\sum_i x_i = R$. After all, each term of the objective is strictly decreasing in $x_i$; so if $\sum_i x_i < R$, we could increase any $x_i$ until equality is achieved, decreasing the objective as we do so.

The complementary slackness condition on $\lambda$ is much more interesting. Note the implication: if $x_i>0$, then $\lambda_i=0$, which means $$x_i > 0 \quad\Longrightarrow\quad \frac{\exp(C_i+x_i)}{(1+\exp(C_i+x_i))^2}=\nu$$ In other words, all of the nonzero values of $x_i$ must obtain the same value for that left-hand quantity. Substituting $u=\exp(C_i+x_i)$ turns this into the quadratic $\nu u^2 + (2\nu-1)u + \nu = 0$, so $$\exp(C_i+x_i) = -1 + \frac{1}{2\nu} ( 1 \pm \sqrt{1-4\nu} )$$ We already have $\nu> 0$, and in order for the right-hand side to be real, we must also have $\nu\leq 1/4$; indeed $1/4$ is the maximum of $e^t/(1+e^t)^2$, attained at $t=0$. For $\nu\in(0,1/4)$ the two roots are both positive and are reciprocals of each other (their product is $1$), corresponding to $C_i+x_i=\pm t$ for some $t>0$; when $\nu=1/4$, they coincide at $1$, giving $C_i+x_i=0$. Taking the larger root, we have $$x_i > 0 \quad\Longrightarrow\quad x_i = \log\left(-1 + \frac{1}{2\nu}( 1 + \sqrt{1-4\nu} )\right)-C_i, \quad\nu\in(0,1/4]$$ The messy right-hand quantity is irrelevant: we now know that all nonzero values of $x_i$ taking this root must obtain the same value of $C_i+x_i$.
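With $u=\exp(C_i+x_i)$, the stationarity equation is the quadratic $\nu u^2+(2\nu-1)u+\nu=0$, whose discriminant works out to $1-4\nu$. A short numerical check (function names are mine) that the larger root really does invert $e^t/(1+e^t)^2=\nu$ for $t\geq 0$:

```python
import math

def lhs(t):
    """exp(t) / (1 + exp(t))^2 -- the common value forced at stationarity."""
    e = math.exp(t)
    return e / (1.0 + e) ** 2

def t_of_nu(nu):
    """Invert lhs(t) = nu for t >= 0 by taking the larger root of
    nu*u^2 + (2*nu - 1)*u + nu = 0 with u = exp(t), valid for 0 < nu <= 1/4."""
    u = -1.0 + (1.0 + math.sqrt(1.0 - 4.0 * nu)) / (2.0 * nu)
    return math.log(u)

for nu in [0.05, 0.1, 0.2, 0.24]:
    t = t_of_nu(nu)
    assert t >= 0.0
    assert abs(lhs(t) - nu) < 1e-9
```

The smaller root is the reciprocal $1/u$, i.e. it gives $-t$ instead of $t$, which is why the two roots pin down $C_i+x_i$ only up to sign.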

This leads us to what is called a water-filling algorithm. The idea is this: we start with the smallest value of $C_i$ and "add water" to it, increasing $x_i$ until $C_i+x_i$ equals the next smallest value, say $C_j$. We then add water to both until $C_i+x_i=C_j+x_j$ equals the third smallest value, and so on. The process continues until we reach $\sum_i x_i = R$.
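Here is a minimal sketch of that levelling procedure (`water_fill` is a name I made up, and, per the edit below, this greedy scheme produces a point satisfying the levelling conditions but is not guaranteed to solve the original problem):

```python
def water_fill(C, R):
    """Greedy water-filling: raise the smallest values of C[i] + x[i] to a
    common level w, spending exactly R in total.  Returns x in the
    original order of C."""
    n = len(C)
    order = sorted(range(n), key=lambda i: C[i])
    Cs = [C[i] for i in order]  # C sorted ascending
    # Choose k = number of "wet" entries.  The level w = (sum of the k
    # smallest C's + R) / k must not exceed the next, still-dry value Cs[k].
    for k in range(1, n + 1):
        w = (sum(Cs[:k]) + R) / k
        if k == n or w <= Cs[k]:
            break
    x = [0.0] * n
    for i in order[:k]:
        x[i] = w - C[i]  # fill entry i up to the common level w
    return x
```

For example, `water_fill([2.0, -1.0, 0.5], 2.0)` spends the whole budget on the two smallest entries, bringing both of their values of $C_i+x_i$ up to the common level $0.75$ while leaving $x=0$ on the largest.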

EDIT: It is clear that one part of this where I do handwave — the description of the water-filling algorithm — is not correct. But note that Bob's example below does satisfy the optimality conditions above. I'll be thinking about this and will edit further as necessary.