Solve $\max_{x_1,x_2,x_3} \{ \alpha \min \{a x_1,b x_2,c x_3\}\}$ s.t. $p_1 x_1 + p_2 x_2 + p_3 x_3 = w$


Consider the objective function \begin{equation*} f(x_1,x_2,x_3)=\alpha \min \{a x_1,b x_2,c x_3\} \end{equation*} where $\alpha, a, b, c \in \mathbb{R}$ are arbitrary constants.

We wish to maximize $f$, i.e. solve \begin{equation*} \max_{(x_1,x_2,x_3) \in \mathbb{R}^3} \{\alpha \min \{a x_1,b x_2,c x_3\} \} \end{equation*} subject to the constraint \begin{equation*} p_1 x_1 +p_2 x_2 + p_3 x_3 = w \end{equation*} where $p_1,p_2,p_3,w \in \mathbb{R}$ are arbitrary constants.

(To visualize this problem: the level sets of $f(x_1,x_2,x_3)=\alpha \min \{a x_1,b x_2, c x_3\}$ are nested "box"-shaped surfaces, and we wish to find the vector $\mathbf{x} \in \mathbb{R}^3$ at which the plane $p_1 x_1 + p_2 x_2 + p_3 x_3 = w$ touches the highest attainable level set, namely at its lower "corner".)


Remark

To solve a constrained optimization problem, you would normally write down a Lagrangian to turn the constrained maximization problem into an unconstrained one: \begin{align*} \mathcal{L}(x_1,x_2,x_3,\lambda) &=f(x_1,x_2,x_3)+\lambda(w - p_1 x_1 - p_2 x_2 - p_3 x_3) \\ &=\alpha \min \{a x_1,b x_2,c x_3\} - \lambda (p_1 x_1 + p_2 x_2 + p_3 x_3 - w) \end{align*} whose candidate solutions are given by the first-order conditions \begin{equation*} \frac{\partial f(x_1,x_2,x_3)}{\partial x_i}=\lambda p_i, \qquad i=1,2,3, \end{equation*} together with the budget constraint. The problem is that, in this particular example, the objective function isn't everywhere differentiable (since $\min$ has a "kink"). As this answer points out, the partial derivatives of the objective function with respect to $x_i$ only exist on the "sides" of this 3-dimensional box. So how, mathematically, can one take first-order conditions to solve this problem?
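For concreteness (assuming $\alpha, a, b, c > 0$), on the open region where $a x_1 < b x_2$ and $a x_1 < c x_3$ the objective reduces to $f(x_1,x_2,x_3)=\alpha a x_1$, so \begin{equation*} \frac{\partial f}{\partial x_1} = \alpha a, \qquad \frac{\partial f}{\partial x_2} = \frac{\partial f}{\partial x_3} = 0, \end{equation*} and similarly on the other two regions. But on the "corner" set $a x_1 = b x_2 = c x_3$, which for positive constants is exactly where the constrained optimum sits, none of these partial derivatives exists, so the first-order conditions above cannot even be written down there.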

Answer

The first-order conditions require differentiability, so you can't use those. What you really need are the generalized KKT conditions, which deal with subdifferentiability. Basically, rather than requiring the gradient of the Lagrangian to be zero at $x^*$, you need the zero vector to be a subgradient of the Lagrangian at $x^*$. This is sufficient since your problem is still concave, despite being nondifferentiable.
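As a sketch of what that condition looks like here (assuming $\alpha, a, b, c > 0$, so the objective is concave): at a point $x^*$ where the three terms tie, $a x_1^* = b x_2^* = c x_3^*$, the supergradients of the objective are the convex combinations of the gradients of the three active pieces, so the stationarity condition becomes \begin{equation*} \lambda \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} \in \alpha \operatorname{conv}\left\{ \begin{pmatrix} a \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ b \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ c \end{pmatrix} \right\}, \end{equation*} i.e. there exist weights $\theta_1,\theta_2,\theta_3 \geq 0$ with $\theta_1+\theta_2+\theta_3=1$ such that $\lambda p_1 = \alpha\theta_1 a$, $\lambda p_2 = \alpha\theta_2 b$, $\lambda p_3 = \alpha\theta_3 c$. The single equation $\nabla f = \lambda p$ is replaced by a set membership.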

Note: this assumes $\alpha \geq 0$ (so the objective stays concave). But in that case, you can just drop $\alpha$.

Note2: problems like this are normally solved using linear programming, not through optimality condition algebra.

Note3: To convert your program into a linear program, the following problem will model the $\min$ in your objective function:

$\max z$ s.t. $z \leq ax_1, z \leq bx_2, z \leq cx_3, p^tx = w$

To clarify: you want to maximize $\min\{a x_1, b x_2, c x_3\}$. You can view this as looking for three quantities of which the smallest one should be as large as possible. Let's denote $z := \min\{a x_1, b x_2, c x_3\}$.

What do we know about $z$? It can't be larger than any of $a x_1$, $b x_2$, $c x_3$. It is easy to see that these inequalities are necessary to model your problem - what might be less obvious is that they are also sufficient.

This is because we are trying to MAXIMIZE $z$: if we may choose ANY feasible $z$ that is not greater than any of $a x_1$, $b x_2$, $c x_3$, then the largest we can ever make $z$ is the smallest of $a x_1$, $b x_2$ and $c x_3$, and that bound is attained. So

$$\text{maximize}\ z = \min\{a x_1, b x_2, c x_3\} \\ \text{s.t.}\ \ p^tx = w$$

is equivalent to

$$\text{maximize}\ z = \min\{a x_1, b x_2, c x_3\} \\ \text{s.t.}\ \ p^tx = w \\ z \leq a x_1,\ z \leq b x_2,\ z \leq c x_3$$

But as explained above, once these inequalities are in place, defining $z$ as the $\min$ of the three terms is redundant. We can simply treat $z$ as a decision variable instead:

$$\text{maximize}\ z \\ \text{s.t.}\ \ p^tx = w \\ z \leq a x_1,\ z \leq b x_2,\ z \leq c x_3.$$
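For what it's worth, here is a minimal numerical sketch of that LP using scipy.optimize.linprog; the numbers are made up, and it assumes $a, b, c, p_i, w > 0$ and $x \geq 0$ (neither of which is stated in the original problem):

```python
# Minimal sketch of the LP above with scipy.optimize.linprog.
# Assumes a, b, c, the prices and w are positive and x >= 0; values are made up.
import numpy as np
from scipy.optimize import linprog

a, b, c = 2.0, 3.0, 4.0
p = np.array([1.0, 1.0, 1.0])
w = 10.0

# Decision vector is (x1, x2, x3, z); linprog minimizes, so minimize -z.
obj = np.array([0.0, 0.0, 0.0, -1.0])

# Inequalities z <= a*x1, z <= b*x2, z <= c*x3, written as A_ub @ v <= 0.
A_ub = np.array([
    [-a, 0.0, 0.0, 1.0],
    [0.0, -b, 0.0, 1.0],
    [0.0, 0.0, -c, 1.0],
])
b_ub = np.zeros(3)

# Budget constraint p^T x = w (z does not appear in it).
A_eq = np.array([[p[0], p[1], p[2], 0.0]])
b_eq = np.array([w])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
print(res.x)     # optimal (x1, x2, x3, z)
print(-res.fun)  # optimal value of min{a*x1, b*x2, c*x3}
```

At the optimum all three products are equal (the "corner" from the question); with these made-up numbers the solver returns $z^* = 120/13 \approx 9.23$.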

You are correct that there will be three additional Lagrange multipliers (but for inequality constraints) and one additional decision variable. Letting $z := x_4$ is completely fine here - there's nothing special about $z$.
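As a sanity check on all of the above (again assuming $\alpha, a, b, c, p_1, p_2, p_3, w > 0$): at the optimum the three terms tie, $a x_1^* = b x_2^* = c x_3^* = z^*$, so substituting $x_1^* = z^*/a$, $x_2^* = z^*/b$, $x_3^* = z^*/c$ into the budget constraint gives \begin{equation*} z^* = \frac{w}{\dfrac{p_1}{a} + \dfrac{p_2}{b} + \dfrac{p_3}{c}}, \qquad x_1^* = \frac{z^*}{a}, \quad x_2^* = \frac{z^*}{b}, \quad x_3^* = \frac{z^*}{c}, \end{equation*} with optimal objective value $\alpha z^*$, consistent with both the subgradient condition and the LP.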