linearization of minimum function


I need advice on a linear programming design. I have spent some time on this problem and have partially reached my goal, but gaps in my knowledge are blocking me from going further. Namely, I need to optimize the linear function:

$f({x}_1, {x}_2,...,{x}_n) = {c}_1{x}_1+{c}_2{x}_2+...+{c}_n{x}_n$, where ${x}_1, {x}_2,...,x_n \in [0,1]$.

As you can see, this is an ordinary linear function. The non-trivial requirement is that some optimal solutions are preferred over others. That is, if the result of the optimization process is:

${x}_1 = 0, {x}_2 = 0.2, {x}_3=0.8, {x}_4 = 0.1$,

then this solution is equivalent to the more preferred solution:

${x}_1 = 1, {x}_2 = 0.1, {x}_3=0.0, {x}_4 = 0.0$.

More precisely, I prefer solutions of the following form.

Let $s = \sum_{i=1}^n x_i$ (the sum of all $x_i$), so $s \in [0, n]$; let $k = \lfloor s \rfloor$ and $r = s - k$ (the fractional part of $s$). Then:

$$x_i = \begin{cases} 1, & i \leq k \\ r, & i = k + 1 \\ 0, & \text{otherwise} \end{cases}$$

Examples:

If $s=2.1$ then $x_1=1, x_2=1, x_3=0.1$ and $x_4,\ldots,x_n=0$; if $s=3.8$ then $x_1=1, x_2=1, x_3=1, x_4=0.8$ and $x_5,\ldots,x_n=0$.
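The construction above can be sketched as follows (my own illustration, not part of the original question; the function name `preferred_solution` is mine):

```python
import math

def preferred_solution(s, n):
    """Build the preferred vector for a total s: x_i = 1 for i <= floor(s),
    x_{floor(s)+1} = frac(s), and x_i = 0 otherwise."""
    k = math.floor(s)        # number of leading ones
    r = s - k                # fractional remainder
    x = [0.0] * n
    for i in range(min(k, n)):
        x[i] = 1.0
    if k < n and r > 0:
        x[k] = r             # position k+1 in 1-based indexing
    return x
```

For example, `preferred_solution(2.1, 5)` yields (up to floating-point rounding) `[1, 1, 0.1, 0, 0]`, matching the first example above.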

I have figured out that such solutions can be forced by a constraint using the minimum function: $x_i=\min\left(1,\; s-\sum_{j=1}^{i-1}x_j\right)$ (the minimum of $1$ and $s$ minus the sum of all $x_j$ before $x_i$), but such a constraint is unfortunately not linear. I tried the trick $x_i=\min\left(1,\; s-\sum_{j=1}^{i-1}x_j\right) = \frac12\left(1+s-\sum_{j=1}^{i-1}x_j\right)-\frac12\left|1-s+\sum_{j=1}^{i-1}x_j\right|$, but the absolute-value term produces disjoint intervals of feasible solutions. Maybe I am not seeing something, but it seems I need help.
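As a quick sanity check (my addition, not part of the question), the standard identity $\min(a,b)=\frac{a+b}{2}-\frac{|a-b|}{2}$, specialized to $a=1$, can be verified numerically:

```python
def min_via_abs(t):
    """min(1, t) expressed via the identity min(a,b) = (a+b)/2 - |a-b|/2."""
    return 0.5 * (1 + t) - 0.5 * abs(1 - t)
```

Note the sign inside the absolute value: with $b = s - \sum_{j<i} x_j$ the term is $|1 - s + \sum_{j<i} x_j|$; this identity is exact, but as noted it is still not linear, since $|\cdot|$ is only piecewise linear.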

P.S. I have to use a "pure" simplex solver, therefore everything I use must be linear and continuous, or "tricked" into that form.

Can anyone help with this?




As @LinAlg says, optimizing over solutions of the "preferred" form is essentially optimization in a single variable ($s$). So you just need to find an optimal solution and then compare it to the best preferred solution. This is true regardless of what your objective function is.

That said, with a linear objective function you can easily compute the best "preferred" solution, since with $k = \lfloor s \rfloor$ and $r = s - k$, $$ f(1,\ldots, 1, r, 0, \ldots, 0) = rc_{k+1} + \sum_{i=1}^{k} c_i. $$

Depending on whether you are maximizing or minimizing, and whether $c_{k+1}$ is positive or negative, it is best to just pick $r=0$ or $r=1$ (unless $c_{k+1}=0$, in which case any $r$ works). In particular, the best preferred solution is always attained at an integer value of $s$.

More generally, since you are optimizing a linear function over a polytope, there is always an optimal solution at one of its vertices. In your case the polytope is just the unit hypercube $[0,1]^n$, whose vertices are the $0$/$1$ vectors, which makes finding them easy. This means you can simply check the signs of the $c_i$ to determine an optimal solution, and then see whether one of the preferred form exists.
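For the maximization case, the two computations described above can be sketched as follows (my own illustration; function names are mine, and I assume maximization of $f(x)=\sum_i c_i x_i$ over $[0,1]^n$):

```python
def cube_max(c):
    """Maximum of sum(c_i * x_i) over the cube [0,1]^n:
    set x_i = 1 exactly where c_i > 0 (a vertex of the cube)."""
    return sum(ci for ci in c if ci > 0)

def best_preferred_max(c):
    """Best objective value over preferred-form solutions (1,...,1,r,0,...,0).
    Since the best r is 0 or 1, this is the maximum prefix sum of c
    (including the empty prefix, i.e. the all-zero solution)."""
    best = prefix = 0.0
    for ci in c:
        prefix += ci
        best = max(best, prefix)
    return best
```

For instance, with `c = [2, -1, 3, -5]` the cube maximum is `5` (take $x_1=x_3=1$), while the best preferred solution only achieves `4` (take $x_1=x_2=x_3=1$), showing that the two can differ.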