A Bi-Level Optimization Problem


I have a problem in which I need to maximize a function over vectors $X \in \mathbb{R}^{N}_{++}$ and $Y \in \mathbb{R}^{N}_{++}$. The maximization problem looks like this:

$$ \max_{X} \left( \sum_{n=1}^{N} f_{n}(x_{n}) - \min_{Y} g(Y;X) \right) $$

where for each $n$, $f_{n}(x_{n})$ is continuous, differentiable, increasing and strictly concave, and $g$ is a continuous, differentiable, convex function. $X$ enters only as a parameter in the constraints (which define a convex set) of the minimization of $g(\cdot)$. Everything is well behaved, so the optimal value of $Y$ is a function of $X$, written $Y(X)$. In other words, the inner problem is

$$ \min_{Y} g(Y) $$ $$ \text{s.t.} \quad h(Y,X)\leq 0 $$

Since everything is well behaved, the envelope theorem applies. The only issue is that I cannot compute the minimizer $Y(X)$ in closed form; I have to solve the inner problem numerically. I also do not have a closed form for the Lagrange multipliers of the inner problem's constraints.
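To make this concrete, here is a minimal sketch of the inner solve on an assumed toy instance (none of these functional forms come from the question): $g(Y)=\sum_n y_n^2$ and $h(Y,X)=X-Y\leq 0$ componentwise, so the analytic answer $Y(X)=X$, $\lambda = 2X$ is available to check against. Since SLSQP does not report multipliers, they are recovered from the KKT stationarity condition, which is easy here because the constraint Jacobian is $-I$.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed toy instance (for illustration only):
#   g(Y) = sum(Y**2)          convex, differentiable
#   h(Y, X) = X - Y <= 0      componentwise, i.e. Y >= X (convex set)
# Analytic solution: Y(X) = X with multipliers lambda = 2*X.

def solve_inner(X):
    g = lambda Y: np.sum(Y ** 2)
    grad_g = lambda Y: 2.0 * Y
    # SLSQP expects inequalities written as c(Y) >= 0, so pass
    # Y - X >= 0, which is equivalent to h(Y, X) = X - Y <= 0.
    cons = {"type": "ineq", "fun": lambda Y: Y - X}
    res = minimize(g, x0=X + 1.0, jac=grad_g, method="SLSQP",
                   constraints=[cons])
    Y = res.x
    # SLSQP does not expose multipliers; recover them from KKT
    # stationarity: grad_g(Y) + lambda @ dh/dY = 0 with dh/dY = -I,
    # hence lambda = grad_g(Y).
    lam = grad_g(Y)
    return Y, lam

Y, lam = solve_inner(np.array([0.5, 2.0]))
# Y ≈ [0.5, 2.0], lam ≈ [1.0, 4.0]
```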


One potential solution is to guess a vector of $X$'s, solve the inner minimization problem and compute the Lagrange multipliers, invoke the envelope theorem to obtain the derivative of the inner value with respect to $X$, and then solve the outer problem via its first-order conditions. This yields a new vector of $X$; iterate until convergence.
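The iteration above can be sketched as follows, again on an assumed toy instance with $f_n(x)=\log x$, $g(Y)=\sum_n y_n^2$, $h(Y,X)=X-Y\leq 0$. The envelope theorem gives $\partial V/\partial x_n = \lambda_n \,\partial h_n/\partial x_n = \lambda_n$, so the outer FOC is $f_n'(x_n)=\lambda_n(X)$, i.e. $x_n = 1/\lambda_n$ here; a damping factor is included because the undamped map can cycle.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed toy instance: f_n(x) = log(x), g(Y) = sum(Y**2),
# h(Y, X) = X - Y <= 0. Analytic fixed point: X = 1/sqrt(2).

def solve_inner(X):
    res = minimize(lambda Y: np.sum(Y ** 2), x0=X + 1.0,
                   jac=lambda Y: 2.0 * Y, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda Y: Y - X}])
    return res.x, 2.0 * res.x   # minimizer and KKT multipliers

X = np.ones(2)
for _ in range(100):
    _, lam = solve_inner(X)
    # Outer FOC with lam held fixed: f'(x_n) = lam_n  =>  x_n = 1/lam_n.
    # Average with the previous iterate; the undamped update cycles here.
    X_next = 0.5 * (X + 1.0 / lam)
    if np.max(np.abs(X_next - X)) < 1e-9:
        X = X_next
        break
    X = X_next
# X converges to 1/sqrt(2) ≈ 0.7071 in each coordinate
```

Each pass requires a full inner solve, which is exactly why the scheme is slow when $N$ is large or the inner problem is expensive.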

However, this is too slow.


Is there any strategy to solve this in one step? Are there any properties of the optimization problem one can appeal to?

What is the challenge? If I could obtain the multipliers as a function of $X$ in the inner problem, then solving the outer problem would be straightforward. The issue is that to compute those multipliers I first have to fix a vector $X$, since the multipliers are themselves functions of $X$.
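One direction I have considered: since the inner problem is convex and well behaved, replace it by its KKT conditions and solve the combined stationarity system over $(X, Y, \lambda)$ in a single call. In general this is an MPEC and the complementarity conditions need care (e.g. a Fischer-Burmeister smoothing); in the toy instance below (same assumed forms as above, where the constraint is known to bind) complementarity reduces to $h(Y,X)=0$, so a plain root-finder suffices.

```python
import numpy as np
from scipy.optimize import root

# Assumed toy instance: f_n(x) = log(x), g(Y) = sum(Y**2),
# h(Y, X) = X - Y <= 0, with the constraint known to be active.
# Analytic solution of the joint system: X = Y = 1/sqrt(2), lambda = sqrt(2).

N = 2

def kkt_system(z):
    X, Y, lam = z[:N], z[N:2 * N], z[2 * N:]
    return np.concatenate([
        2.0 * Y - lam,   # inner stationarity: grad_Y g + lam @ dh/dY = 0
        X - Y,           # active inner constraint: h(Y, X) = 0
        1.0 / X - lam,   # outer stationarity (envelope): f'(X) - lam * dh/dX = 0
    ])

sol = root(kkt_system, x0=np.ones(3 * N))
X = sol.x[:N]
# analytic fixed point: X = 1/sqrt(2) in each coordinate
```

This collapses the bilevel structure into one solve, at the cost of having to decide (or smooth over) which inner constraints are active.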

Any ideas?