I have a problem in the following setting:
- We maximize a concave quadratic objective function over a polytope, so we have a QP. All variables are non-negative and must sum to one, i.e., the feasible set is the standard simplex.
- When we solve this problem, we know that the optimal solution $x^*$ satisfies $x^*_i < a$ for some special index $i$ and some given value $a \in (0,1)$.
- Now we add the constraint $x_i \geq a$ to the problem (see the formalization after this list). This cuts off the previous optimal solution, and the objective value decreases.
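Concretely, with notation I am introducing here ($Q$ negative semidefinite so the objective is concave, $c$ the linear coefficients), the base problem is

$$\max_{x \in S} \; x^\top Q x + c^\top x, \qquad S = \Big\{ x \in \mathbb{R}^n \;\Big|\; \textstyle\sum_{j=1}^n x_j = 1,\; x \geq 0 \Big\},$$

and the modified problem additionally requires $x_i \geq a$.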
The question is: by what amount $\Delta$ do we have to increase the coefficient of $x_i$ in the linear part of the objective function (without touching the quadratic part) so that an optimal solution of the problem *without* the additional constraint satisfies $x_i \geq a$? In other words: how valuable must $x_i$ become for the constraint to no longer be necessary?
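In the notation above, writing $e_i$ for the $i$-th unit vector, I am looking for

$$\Delta^* = \min\Big\{ \Delta \geq 0 \;\Big|\; \text{some } x^* \in \arg\max_{x \in S} \; x^\top Q x + (c + \Delta e_i)^\top x \text{ satisfies } x^*_i \geq a \Big\}.$$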
Of course, for a sufficiently large coefficient the variable would dominate everything and we would assign $x_i = 1$. I also believe that $x^*_i$ is monotone (non-decreasing) in $\Delta$, so we could search for the correct value of $\Delta$ by bisection (a sketch is below). However, I'm more interested in a direct method, e.g., one based on another optimization problem.
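For what it's worth, here is a minimal sketch of that bisection, assuming `cvxpy` and `numpy` are available; the function names and the random toy instance are mine, and the search relies on the monotonicity conjecture above:

```python
import numpy as np
import cvxpy as cp

def solve_simplex_qp(Q, c):
    """Maximize x^T Q x + c^T x over the standard simplex; return an optimal x."""
    n = len(c)
    x = cp.Variable(n)
    objective = cp.Maximize(cp.quad_form(x, Q) + c @ x)  # Q must be NSD for concavity
    cp.Problem(objective, [cp.sum(x) == 1, x >= 0]).solve()
    return x.value

def find_delta(Q, c, i, a, tol=1e-6):
    """Bisect for the smallest Delta such that an optimal solution of the
    problem WITHOUT the constraint has x_i >= a.  Assumes x_i^* is
    monotone non-decreasing in Delta (conjectured, not proven here)."""
    e_i = np.zeros(len(c))
    e_i[i] = 1.0
    lo, hi = 0.0, 1.0
    # Grow the upper end of the bracket: for large enough Delta,
    # the optimum pushes x_i toward 1, so eventually x_i >= a.
    while solve_simplex_qp(Q, c + hi * e_i)[i] < a:
        hi *= 2.0
    # Bisect on [lo, hi].
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if solve_simplex_qp(Q, c + mid * e_i)[i] >= a:
            hi = mid
        else:
            lo = mid
    return hi

# Toy instance: random negative-semidefinite Q (so the objective is concave).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q, c = -A @ A.T, rng.standard_normal(5)
print(find_delta(Q, c, i=0, a=0.4))
```

Each bisection step requires one QP solve, which is exactly why a direct method would be preferable.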
This reminds me of both sensitivity analysis and inverse optimization, but I haven't seen this exact problem in the literature before.