Convex optimization: piecewise quadratic objective


This question is about a convex optimization problem whose objective function is defined piecewise. We have two functions: a concave function $A(x)$ and a strictly convex, increasing function $B(x)$, both defined on $[0, \infty)$ and nonnegative. Here $a, b, c, \alpha, \beta$ are constants.

The objective function is $A(x) - B(x)$, which is concave (a concave function minus a convex one), so the maximization problem is convex and is formulated as follows:

$$A(x) = \begin{cases} \beta x - \frac{\alpha}{2} x^2, & 0 \leq x < \frac{\beta}{\alpha} \\ \frac{\beta^2}{2 \alpha}, & x \geq \frac{\beta}{\alpha} \end{cases}$$

$$B(x) = a x^2 + b x + c$$

$$\max_{x \geq 0} \; A(x) - B(x)$$

Here is a plot of $A$ (red), $B$ (blue), and the objective function $A - B$ (green) as an example.

Optimization problem plot
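To make the setup concrete, here is a small numerical sketch in Python (the constants $\alpha, \beta, a, b, c$ below are made-up example values, not from the post) that evaluates $A$, $B$, and the objective $A - B$, and locates the maximizer by a brute-force grid search over $x \geq 0$:

```python
import numpy as np

# Assumed example constants (not from the original post)
alpha, beta = 2.0, 4.0   # A's parameters; the kink is at beta/alpha = 2
a, b, c = 0.5, 0.2, 0.1  # B's coefficients (a > 0 for strict convexity)

def A(x):
    """Piecewise-defined concave function: quadratic below beta/alpha, constant above."""
    return np.where(x < beta / alpha,
                    beta * x - 0.5 * alpha * x**2,
                    beta**2 / (2.0 * alpha))

def B(x):
    """Strictly convex, increasing quadratic on x >= 0."""
    return a * x**2 + b * x + c

# Coarse grid search over x >= 0 to locate the maximum of A - B
xs = np.linspace(0.0, 5.0, 100_001)
obj = A(xs) - B(xs)
x_star = xs[np.argmax(obj)]
print(f"approx maximizer x* = {x_star:.4f}, objective value = {obj.max():.4f}")
```

With these constants the kink of $A$ plays no role at the optimum (the maximizer lies in the quadratic regime), but the grid search treats both cases of $A$ uniformly, which is useful as a sanity check for any reformulation.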

Note that this is a simplified version of the actual problem, in which a sum of $A$-type terms minus a sum of $B$-type terms is maximized over multiple variables (collapsed into the single $x$ above); the simplified version captures the core issue I have.

My question is: how do I formulate this problem for an optimizer such as JOptimizer, i.e. how do I cover both cases of $A$? And what class of problem is this, if not QP or QCQP?

What I have tried so far: the problem is convex, and with my limited knowledge of optimization theory I can only classify it as QP or QCQP among LP, QP, QCQP, SOCP, SDP, and GP. I can successfully transform the problem to a QP if I drop the second (constant) case of $A$: the objective is then quadratic, but the resulting QP is only valid on the range covered by the first case. I have researched this extensively but have not found a single source on how to classify or transform a problem containing a two-case function like $A$. The only related technique I found is for linear programs with piecewise-linear objectives, where one introduces an auxiliary variable to maximize; that does not work here, because the second case of $A$ is always the larger of the two expressions and the first is always the smaller.

Accepted answer:

I'm not sure whether this will generalize to your more complex model, but the scalar case isn't that difficult. Let's define $\bar{A}(z) = \beta z - \frac{\alpha}{2} z^2$, the first "mode" of $A$. Since $\bar{A}$ is increasing for $z \leq \beta/\alpha$, the maximum below is attained at $z = \min(x, \beta/\alpha)$, which reproduces both cases of $A$:

$$A(x) = \max\{\bar{A}(z) \,|\, z\leq x, ~z \leq \beta/\alpha\}$$

So your original problem transforms from this: $$\begin{array}{ll} \underset{x}{\text{maximize}} & A(x) - B(x) \\ \text{subject to} & x \geq 0 \end{array}$$ to this: $$\begin{array}{ll} \underset{x,z}{\text{maximize}} & \bar{A}(z) - B(x) \\ \text{subject to} & x \geq 0 \\ & x \geq z \\ & z \leq \beta/\alpha \end{array}$$ Now you have a garden-variety QP.
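The question asks about JOptimizer (Java); as a language-neutral illustration, here is a sketch of the reformulated QP in Python, solved with SciPy's general-purpose SLSQP solver and the same made-up constants as before (any QP solver, including JOptimizer, accepts the same structure):

```python
import numpy as np
from scipy.optimize import minimize

# Assumed example constants (not from the original answer)
alpha, beta = 2.0, 4.0
a, b, c = 0.5, 0.2, 0.1

def neg_obj(v):
    """Negative of Abar(z) - B(x) over v = (x, z); SciPy minimizes, so we negate."""
    x, z = v
    Abar = beta * z - 0.5 * alpha * z**2
    Bx = a * x**2 + b * x + c
    return -(Abar - Bx)

# Linear constraints of the reformulation, written as g(v) >= 0
constraints = [
    {"type": "ineq", "fun": lambda v: v[0]},                 # x >= 0
    {"type": "ineq", "fun": lambda v: v[0] - v[1]},          # x >= z
    {"type": "ineq", "fun": lambda v: beta / alpha - v[1]},  # z <= beta/alpha
]

res = minimize(neg_obj, x0=[1.0, 1.0], method="SLSQP", constraints=constraints)
x_opt, z_opt = res.x
print(f"x* = {x_opt:.4f}, z* = {z_opt:.4f}, objective = {-res.fun:.4f}")
```

At the optimum $x = z$ here, since $B$ is increasing on $x \geq 0$ and the constraint $x \geq z$ is therefore active; the solver's answer matches a brute-force grid search over the original piecewise objective.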