I'm trying to formulate an optimization problem as a linear program, and I ran into trouble with an objective function of the form:
$\text{maximize:}\quad f(x_1)$
$f(x_1) = a_1 x_1 - a_2 \lvert x_1\rvert$
$\text{subject to:}\quad \langle\text{constraints}\rangle$
I tried the classic linear reformulation: introduce an auxiliary variable $x_1'$ (intended to satisfy $x_1' \ge \lvert x_1 \rvert$), replace $\lvert x_1 \rvert$ with $x_1'$ in the objective, and add the constraints:
$x_1 - x_1' \le 0$
$-x_1 - x_1' \le 0$
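For reference, in matrix form the two constraints over the vector $(x_1, x_1')$ are $Ax \le 0$, and a quick numeric check (a sketch with made-up values) shows that a negative point like $(x_1, x_1') = (-2, 2)$ does satisfy them, so the constraints themselves don't rule out negative $x_1$:

```python
import numpy as np

# Constraints x1 - x1' <= 0 and -x1 - x1' <= 0 over the vector (x1, x1'),
# which together enforce x1' >= |x1|.
A = np.array([[1.0, -1.0],
              [-1.0, -1.0]])
b = np.zeros(2)

point = np.array([-2.0, 2.0])  # x1 = -2 with x1' = |x1| = 2
print(np.all(A @ point <= b))  # True: the negative point is feasible
```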
The problem I ran into is that the solutions seem to constrain $x_1$ to positive values, even in cases where it should obviously be negative. From the documentation I have, this appears to be because the reformulation assumes the objective pushes the variables in a single direction (all maximized or all minimized); if you break that assumption, you're forced to fall back on the MILP approach with binary indicator variables.
I haven't found any discussion of this limitation on using the variables $x_1$ and $x_1'$ in the objective function. Is it possible to do what I want without resorting to ILP?
Splitting the variable into positive and negative components worked for us. I'll include my solution below, though it does have a limited scope: it relies on the absolute-value term being penalized by the objective (here, $a_2 \ge 0$ under maximization), since otherwise the solver can inflate both components and the split no longer represents $\lvert x_1 \rvert$.
Basically, for each variable $x_1$, we split it into two variables, $x_{1+}$ and $x_{1-}$, and add the constraints:
$-x_{1+}\le 0$
and
$x_{1-}\le 0$
We then apply the objective's coefficients to each part individually. Since $x_1 = x_{1+} + x_{1-}$ and $\lvert x_1 \rvert = x_{1+} - x_{1-}$, the original objective becomes:
$f(x_{1+},x_{1-}) = (a_1 - a_2)\,x_{1+} + (a_1 + a_2)\,x_{1-}$
With the final solution for the variable $x_1$ being:
$x_1 = x_{1+} + x_{1-}$
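As a minimal sketch of the split (using `scipy.optimize.linprog`, with made-up coefficients $a_1 = 1$, $a_2 = 0.5$ and an example constraint $-3 \le x_1 \le -1$ chosen so the optimum is clearly negative):

```python
from scipy.optimize import linprog

a1, a2 = 1.0, 0.5  # hypothetical coefficients

# Substituting x1 = x_plus + x_minus and |x1| = x_plus - x_minus turns
#   maximize a1*x1 - a2*|x1|
# into
#   maximize (a1 - a2)*x_plus + (a1 + a2)*x_minus.
# linprog minimizes, so negate the coefficients.
c = [-(a1 - a2), -(a1 + a2)]

# Example constraint -3 <= x_plus + x_minus <= -1, written as A_ub @ x <= b_ub.
A_ub = [[ 1.0,  1.0],   #  x_plus + x_minus <= -1
        [-1.0, -1.0]]   # -(x_plus + x_minus) <= 3
b_ub = [-1.0, 3.0]

# Sign constraints from the split: x_plus >= 0, x_minus <= 0.
bounds = [(0, None), (None, 0)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x1 = res.x[0] + res.x[1]   # recover x1 = x_plus + x_minus
print(x1, -res.fun)        # x1 = -1, objective a1*x1 - a2*|x1| = -1.5
```

The solver correctly lands on a negative $x_1 = -1$ (with $x_{1+} = 0$, $x_{1-} = -1$), and the recovered objective value matches $a_1 x_1 - a_2 \lvert x_1 \rvert = -1 - 0.5 = -1.5$.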
After we attempted this and proved it out, we found discussion on the method here.