How to minimize a sum of absolute values using linear programming?


I am having trouble understanding the logic behind optimizing a cost function of the form $$\min (|x| + |y| + |z|)$$ subject to linear constraints $$Ax \le b, \qquad Cx = d,$$ for example $$ x + y \le 1, \qquad 2x + z = 3.$$

I have seen methods that represent each absolute value as a new variable and put constraints on it, i.e. setting $|x| = a$ and then requiring $$-a \le x \le a,$$ but I don't understand why we should represent an equality as an inequality. Aren't we changing the equation itself?

I have tried to think but am unable to grasp it. Please help.

The method is worked through as a numerical example on the Northwestern optimization wiki: https://optimization.mccormick.northwestern.edu/index.php/Optimization_with_absolute_values

EDIT: The relevant example
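For concreteness, here is a sketch of that reformulation applied to the example above, using `scipy.optimize.linprog` (my own illustration, not code from the linked page): each absolute value gets an auxiliary variable, e.g. $t_x$ with $-t_x \le x \le t_x$, and the sum of the auxiliaries is minimized.

```python
import numpy as np
from scipy.optimize import linprog

# Variables: [x, y, z, tx, ty, tz]; minimize tx + ty + tz.
c = np.array([0, 0, 0, 1, 1, 1])

# Inequalities (A_ub @ v <= b_ub):
#   x + y <= 1                       (original constraint)
#   x - tx <= 0  and  -x - tx <= 0   (i.e. -tx <= x <= tx), same for y, z
A_ub = np.array([
    [ 1,  1,  0,  0,  0,  0],
    [ 1,  0,  0, -1,  0,  0],
    [-1,  0,  0, -1,  0,  0],
    [ 0,  1,  0,  0, -1,  0],
    [ 0, -1,  0,  0, -1,  0],
    [ 0,  0,  1,  0,  0, -1],
    [ 0,  0, -1,  0,  0, -1],
])
b_ub = np.array([1, 0, 0, 0, 0, 0, 0])

# Equality: 2x + z = 3
A_eq = np.array([[2, 0, 1, 0, 0, 0]])
b_eq = np.array([3])

# x, y, z are free; the auxiliaries are nonnegative
bounds = [(None, None)] * 3 + [(0, None)] * 3

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.fun)  # optimal value of |x| + |y| + |z|
```

For this instance the optimum is $|x|+|y|+|z| = 2$ (attained e.g. at $x=1$, $y=0$, $z=1$). Note that each $t$ ends up equal to the corresponding absolute value automatically: the objective pushes $t_x$ down until it hits its lower bound $|x|$, which is why the inequality relaxation is exact for a minimization.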

There are 2 answers below.

BEST ANSWER

On that page, they are solving a different problem. They want to impose the constraint $|x| \le b$, so it's true that they can split that inequality into those two linear ones. In your case, you can't, because you need the new variable to be either $x$ or $-x$, not any value in between.

What you can do is use a bivalent (binary) variable. These are also called logical variables.

With one, you can define another variable, say $u$, and constrain it like this.

Let's say $V$ is a bivalent variable and $M$ is a big number. Then:

$$x - MV \le u \le x + MV$$

$$-x - M(1-V) \le u \le -x + M(1-V)$$

This way, when $V = 0$, the second constraint is inactive, and the first one forces $u = x$.

When $V = 1$, the first constraint is inactive, and the second one forces $u = -x$.

So, when the problem is solved, $u$ can only take one of those two values.
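A quick sanity check of that switching behaviour (a sketch; the helper `feasible` is just for illustration, not part of any solver):

```python
def feasible(u, x, V, M=100):
    """Check both big-M constraints linking u, x, and the binary V."""
    first = (x - M * V) <= u <= (x + M * V)
    second = (-x - M * (1 - V)) <= u <= (-x + M * (1 - V))
    return first and second

x = 5
# V = 0: the first constraint pins u to x; the second is slack.
print(feasible(5, x, 0))   # True  (u = x)
print(feasible(4, x, 0))   # False (u != x)
# V = 1: the second constraint pins u to -x; the first is slack.
print(feasible(-5, x, 1))  # True  (u = -x)
print(feasible(5, x, 1))   # False (u != -x)
```

Note that $M$ must be chosen larger than any value $|x|$ can take, otherwise the "inactive" constraint can still cut off feasible points.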


From section 6.1.1 of Boyd & Vandenberghe:

Sum of absolute residuals approximation

When the $\ell_1$-norm is used, the norm approximation problem $$ \text{minimize} \quad \| A x - b \|_1 = | r_1 | + \dots + | r_m | $$ is called the sum of (absolute) residuals approximation problem, or, in the context of estimation, a robust estimator (for reasons that will be clear soon). Like the Chebyshev approximation problem, the $\ell_1$-norm approximation problem can be cast as an LP $$ \begin{array}{ll} \text{minimize} & {\bf 1}^T t \\ \text{subject to} & -t \preceq A x - b \preceq t, \end{array} $$ with variables $x \in {\bf R}^n$ and $t \in {\bf R}^m$.
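That LP is straightforward to set up with an off-the-shelf solver. Here is a sketch using `scipy.optimize.linprog` (the data is my own toy example): with $A$ a column of ones and scalar $x$, minimizing $\|Ax - b\|_1$ simply computes the median of $b$, which makes the result easy to verify.

```python
import numpy as np
from scipy.optimize import linprog

def l1_approx(A, b):
    """Solve  minimize ||A x - b||_1  via the LP
       minimize 1^T t  subject to  -t <= A x - b <= t."""
    m, n = A.shape
    # Decision vector: [x (n entries), t (m entries)]; objective sums the t's.
    c = np.concatenate([np.zeros(n), np.ones(m)])
    #   A x - b  <= t  ->  [ A, -I] [x; t] <=  b
    # -(A x - b) <= t  ->  [-A, -I] [x; t] <= -b
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.fun

# With A = column of ones, the best l1 fit of a scalar x to b is the median of b.
A = np.ones((5, 1))
b = np.array([1.0, 2.0, 3.0, 7.0, 100.0])
x, obj = l1_approx(A, b)
print(x, obj)  # x ≈ [3.], obj ≈ 104 = sum(|b - 3|)
```

The huge outlier $100$ barely moves the $\ell_1$ fit (the median stays at $3$), which is exactly the robustness Boyd & Vandenberghe allude to: least squares on the same data would be dragged far toward the outlier.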