Difficult Lagrange multipliers problem


Let us consider the following convex optimization problem: \begin{equation} \begin{aligned} \max_{\mathbf{x}\in \mathbb{R}^{n}} \quad & \mathbf{c}^\top \mathbf{x} \\ \textrm{subject to} \quad & \exp(\mathbf{a}^\top \mathbf{x}) + \exp(\mathbf{b}^\top \mathbf{x}) - \mathbf{d}^\top \mathbf{x} + \mathbf{x}^\top \mathbf{A} \mathbf{x} \leq \Gamma, \end{aligned} \end{equation} where $\mathbf{a}, \mathbf{b}, \mathbf{c}, \mathbf{d} \in \mathbb{R}^n$ are real vectors, $\mathbf{c}$ is not the null vector $\mathbf{0}_n$, $\mathbf{A}$ is a positive definite matrix, and $\Gamma$ is a positive constant. The Lagrangian function $\mathcal{L}$ is given by \begin{equation*} \mathcal{L}(\mathbf{x}, \mu) = \mathbf{c}^\top \mathbf{x} - \mu [\exp(\mathbf{a}^\top \mathbf{x}) + \exp(\mathbf{b}^\top \mathbf{x}) - \mathbf{d}^\top \mathbf{x} + \mathbf{x}^\top \mathbf{A} \mathbf{x} - \Gamma], \end{equation*} where $\mu \geq 0$ is a Lagrange multiplier. Since the problem is convex and differentiable (and assuming a constraint qualification such as Slater's condition holds), the Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient for optimality: \begin{align} \mathbf{c} - \mu[\mathbf{a}\exp(\mathbf{a}^\top \mathbf{x}) + \mathbf{b}\exp(\mathbf{b}^\top \mathbf{x}) - \mathbf{d} + 2\mathbf{A} \mathbf{x}] & = \mathbf{0}_{n},\\ \mu [\exp(\mathbf{a}^\top \mathbf{x}) + \exp(\mathbf{b}^\top \mathbf{x}) - \mathbf{d}^\top \mathbf{x} + \mathbf{x}^\top \mathbf{A} \mathbf{x} - \Gamma] & = 0, \\ \exp(\mathbf{a}^\top \mathbf{x}) + \exp(\mathbf{b}^\top \mathbf{x}) - \mathbf{d}^\top \mathbf{x} + \mathbf{x}^\top \mathbf{A} \mathbf{x} & \leq \Gamma. \end{align} If the last inequality were strict at the optimum, then the second condition (complementary slackness) would force $\mu = 0$; the first equation would then read $\mathbf{c} = \mathbf{0}_{n}$, a contradiction, since $\mathbf{c} \neq \mathbf{0}_{n}$.
Therefore, $\mu > 0$ and the first equation reads \begin{equation*} \mathbf{a}\exp(\mathbf{a}^\top \mathbf{x}) + \mathbf{b}\exp(\mathbf{b}^\top \mathbf{x}) + 2\mathbf{A} \mathbf{x} = \frac{\mathbf{c}}{\mu} + \mathbf{d}. \end{equation*} If an explicit solution for $\mathbf{x}$ were available here, it could be plugged into the now-active constraint \begin{equation*} \exp(\mathbf{a}^\top \mathbf{x}) + \exp(\mathbf{b}^\top \mathbf{x}) - \mathbf{d}^\top \mathbf{x} + \mathbf{x}^\top \mathbf{A} \mathbf{x} = \Gamma, \end{equation*} thus yielding the value of the Lagrange multiplier. However, since the stationarity equation admits no closed-form solution in $\mathbf{x}$, I am not sure how to continue solving this problem. Any help will be appreciated.
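One way to continue is numerical: for each fixed $\mu > 0$, the left-hand side of the rearranged stationarity equation is the gradient of the strictly convex potential $\varphi_\mu(\mathbf{x}) = \exp(\mathbf{a}^\top \mathbf{x}) + \exp(\mathbf{b}^\top \mathbf{x}) + \mathbf{x}^\top \mathbf{A} \mathbf{x} - (\mathbf{c}/\mu + \mathbf{d})^\top \mathbf{x}$, so its unique root $\mathbf{x}(\mu)$ is the minimizer of $\varphi_\mu$; one can then tune $\mu$ by bisection until the constraint holds with equality. A sketch of this two-level scheme in Python/SciPy, with arbitrary illustrative data (assuming, as holds in this example, that the constraint residual is monotone decreasing in $\mu$):

```python
import numpy as np
from scipy.optimize import minimize

# Arbitrary illustrative data (not from the question).
rng = np.random.default_rng(0)
n = 3
a, b, c, d = (rng.normal(size=n) for _ in range(4))
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)      # positive definite
Gamma = 10.0

def g_val(x):
    """Constraint left-hand side minus Gamma (negative = slack)."""
    return np.exp(a @ x) + np.exp(b @ x) - d @ x + x @ A @ x - Gamma

def x_of_mu(mu):
    """Unique root of the stationarity equation for fixed mu > 0,
    found as the minimizer of the strictly convex potential phi."""
    r = c / mu + d
    phi = lambda x: np.exp(a @ x) + np.exp(b @ x) + x @ A @ x - r @ x
    grad = lambda x: a * np.exp(a @ x) + b * np.exp(b @ x) + 2 * A @ x - r
    return minimize(phi, np.zeros(n), jac=grad, method="BFGS").x

# Bracket the multiplier: small mu overshoots the budget (g > 0),
# large mu undershoots it (g < 0), under the monotonicity assumption.
lo, hi = 1e-2, 1.0
while g_val(x_of_mu(lo)) <= 0.0:
    lo /= 10.0
while g_val(x_of_mu(hi)) >= 0.0:
    hi *= 10.0

# Bisect until the constraint is (numerically) active.
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g_val(x_of_mu(mid)) > 0.0:
        lo = mid
    else:
        hi = mid

mu = 0.5 * (lo + hi)
x = x_of_mu(mu)
print("mu =", mu, "objective =", c @ x, "constraint residual =", g_val(x))
```

The same idea works with any inner solver for the stationarity equation, e.g. Newton's method, since the Hessian $\mathbf{a}\mathbf{a}^\top e^{\mathbf{a}^\top\mathbf{x}} + \mathbf{b}\mathbf{b}^\top e^{\mathbf{b}^\top\mathbf{x}} + 2\mathbf{A}$ is available in closed form.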


1 Answer


This optimization problem can be solved numerically with a convex conic solver, which can be conveniently invoked from a convex optimization modeling tool such as CVX, CVXPY, Convex.jl, CVXR, or YALMIP (which also supports non-convex and non-conic modeling).

The CVX (under MATLAB) code is:

cvx_begin
variable x(n)
maximize(c'*x)
exp(a'*x) + exp(b'*x) - d'*x + x'*A*x <= Capital_Gamma
cvx_end

Mosek is the best solver to invoke for this problem, since it natively supports the exponential cone that this constraint requires.