Lagrangian multipliers for sensitivity analysis in integer linear programming?


I am a knuckle-dragging engineer by training, but find myself possibly contending with how best to conduct sensitivity analysis (SA) for an integer linear programming (ILP) problem. Being new to both, I am relying a lot on Matlab's tools, which allow me to get away with knowing the tricks of converting real-world constraints into equality and inequality constraints, then letting the tools do the grinding.

In one of the tutorial videos, passing mention is made at the end of using Lagrangian multipliers for SA. While the video focuses on linear problems, it also makes the viewer aware that the technique applies to nonlinear problems as well. Hence, I'm not sure whether the comment about using Lagrangian multipliers applies to ILPs.

My websearching unearthed intuitively satisfying background on Lagrangian multipliers in constrained optimization, but I'm still unclear on whether they are used in the SA of ILPs.

Before I embark on a possibly long and meandering journey of acquiring the knowledge to answer this question, can anyone comment on whether Lagrange multipliers are used in the sensitivity analysis of ILPs?

Thanks.


There are 2 answers below.

BEST ANSWER

Although I agree with what Larry wrote, I'm not sure that Lagrangian multipliers are actually used for sensitivity analysis of IPs and MIPs in practice. One typically wants to look at the product of a multiplier with the change to the right hand side (RHS) of a constraint, and that's seldom (if ever?) meaningful in an IP.

To see why, consider the trivial problem of maximizing $2x+3y$ subject to the constraint $x + y \le 2$, with $x$ and $y$ both binary variables. The feasible region consists of the four corners of the unit square, the optimal solution is $(x,y)=(1,1)$ with objective value $5$, and the lone constraint is binding (tight) at the optimum.

If we increase the RHS ($2$) of the constraint by any amount, the feasible region and optimal solution are unchanged, so the marginal benefit is $0$. If we decrease the RHS to any value in $[1, 2)$, the optimum jumps to $(x,y)=(0,1)$ with value $3$.

So small (or even large) increases in the RHS do nothing, while small decreases in the RHS cause a step change in the objective value. Assuming your Lagrange multiplier ends up positive, multiplying it by a positive change to the RHS will overestimate (possibly badly) the change (none) in the objective; multiplying it by a small negative change to the RHS will likely underestimate the (negative) change to the objective value, again possibly badly.
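This step behavior is easy to verify by brute force; here is a minimal sketch (plain enumeration of the four binary points, no solver library assumed):

```python
from itertools import product

def solve(rhs):
    """Maximize 2x + 3y over binary x, y subject to x + y <= rhs,
    by enumerating the four corners of the unit square."""
    best = None
    for x, y in product((0, 1), repeat=2):
        if x + y <= rhs:
            val = 2 * x + 3 * y
            if best is None or val > best[0]:
                best = (val, (x, y))
    return best

# Raising the RHS above 2 changes nothing; lowering it below 2
# drops the objective from 5 to 3 in a single step.
for rhs in (3.0, 2.0, 1.9, 1.0):
    print(rhs, solve(rhs))
```

No product of a multiplier with an RHS change can reproduce that flat-then-step pattern, which is exactly the point above.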

ANSWER

The short answer is yes, Lagrange multipliers can be used for sensitivity analysis of ILPs.

But there are a few caveats. One is that finding the optimal Lagrange multipliers for an ILP is not as straightforward as it is for an LP. For an LP, you run the simplex method and it automatically generates dual values -- done.
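To illustrate the LP case, here is what that looks like with SciPy's `linprog` (assuming you have SciPy available; MATLAB's `linprog` returns the same information in its `lambda` output), applied to the LP relaxation of the binary example in the other answer:

```python
from scipy.optimize import linprog

# LP relaxation of the binary example:
# max 2x + 3y  s.t.  x + y <= 2,  0 <= x, y <= 1.
# linprog minimizes, so negate the objective.
res = linprog(c=[-2, -3], A_ub=[[1, 1]], b_ub=[2],
              bounds=[(0, 1), (0, 1)], method="highs")
print(res.fun)                # -5.0: the LP optimum is (x, y) = (1, 1)
print(res.ineqlin.marginals)  # dual value (shadow price) of x + y <= 2
```

The duals come for free with the solution; no extra optimization is needed.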

But in ILP, you usually need to solve another optimization problem to find the optimal multipliers. Typically the form is something like

$$\max_{\lambda \ge 0} \min_{x} f(x) + \lambda(Ax - b),$$ where $\lambda$ is the vector of Lagrange multipliers, $x$ are the primal variables (i.e., the variables in your original problem), $f(x)$ is the objective function of the original problem, and $Ax \le b$ are the constraints you relax from the original problem. (Note the sign: with $\lambda \ge 0$ and $Ax \le b$ relaxed, the penalty term must be $\lambda(Ax - b)$ so that the inner minimum is a valid lower bound.) I'm assuming your problem is a minimization problem; for a maximization, swap the min and max and flip the penalty to $\lambda(b - Ax)$.

The outer objective function is piecewise linear and concave, and the outer problem is usually solved with subgradient optimization or some other nonsmooth optimization method. At each iteration of the outer problem, i.e., for each candidate $\lambda$, you solve the inner optimization problem, so you also need a very efficient way to solve the inner problem.
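As a rough sketch of that outer loop, here is projected subgradient ascent on a hypothetical toy ILP (the binary example from the other answer, negated into a minimization; no particular library is assumed):

```python
# Toy ILP: minimize -2x - 3y  subject to  x + y <= 2,  x and y binary.
# We relax the single constraint and run projected subgradient ascent
# on the multiplier lambda >= 0.

def solve_inner(lam):
    """Inner problem: min over binary x, y of -2x - 3y + lam*(x + y - 2).
    It is separable, so set each variable to 1 iff its coefficient
    in the Lagrangian is negative."""
    x = 1 if (-2 + lam) < 0 else 0
    y = 1 if (-3 + lam) < 0 else 0
    return x, y

def dual_value(lam):
    x, y = solve_inner(lam)
    return -2 * x - 3 * y + lam * (x + y - 2)

lam, best = 4.0, float("-inf")          # deliberately bad starting multiplier
for k in range(1, 101):
    best = max(best, dual_value(lam))   # track the best lower bound so far
    x, y = solve_inner(lam)
    subgrad = x + y - 2                 # a subgradient of the dual at lam
    lam = max(0.0, lam + subgrad / k)   # ascent step, projected onto lam >= 0
print(best, lam)                        # -> -5.0 1.5
```

Here the best dual bound, $-5$, happens to equal the ILP optimum at $(x,y)=(1,1)$, so there is no duality gap in this toy case; in general the Lagrangian bound can sit strictly below the integer optimum.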

I really like this paper as an overview of Lagrangian relaxation for ILP.

Another thing to keep in mind is that, even for continuous problems, Lagrange multipliers can only tell you so much in terms of sensitivity analysis. They give you the direction of change in the optimal objective value with respect to a change in the right-hand side of a constraint, and some sense of the magnitude of that change, but only while the change in the RHS is relatively small.

Many people use "sensitivity analysis" to mean something a bit less particular -- you want to try a bunch of different values for the parameters and see how the solutions change. If that's the kind of approach you have in mind, the whole thing might be a lot simpler if you just re-run the model a bunch of times, for a bunch of different parameters, rather than trying to get a theoretical SA model using multipliers.
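In that spirit, the brute-force version of SA is just a loop over parameter values; a sketch below, with enumeration of the binary example standing in for whatever your real solver call is:

```python
from itertools import product

def solve_model(c1, c2, rhs):
    """Re-solve: max c1*x + c2*y  s.t.  x + y <= rhs,  x and y binary."""
    return max(((c1 * x + c2 * y, (x, y))
                for x, y in product((0, 1), repeat=2)
                if x + y <= rhs), default=None)

# Sweep the parameter of interest and tabulate how the optimum responds.
for rhs in (0, 1, 2, 3):
    print(rhs, solve_model(2, 3, rhs))
```

For modest model sizes and parameter grids, this is often faster to set up (and easier to trust) than a multiplier-based theory.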