I am a knuckle-dragging engineer by training, but find myself contending with how best to conduct sensitivity analysis (SA) for an integer linear programming (ILP) problem. Being new to both, I am relying heavily on Matlab's tools, which let me get away with knowing only the tricks for converting real-world constraints into equality and inequality constraints, then letting the tools do the grinding.
In one of the tutorial videos, passing mention is made at the end of using Lagrange multipliers for SA. While the video focuses on linear problems, it also makes sure the viewer is aware that the technique applies to nonlinear problems as well. However, I'm not sure whether the comment about using Lagrange multipliers applies to ILPs.
My web searching unearthed intuitively satisfying background on Lagrange multipliers in constrained optimization, but I'm still unclear on whether they are used in the SA of ILPs.
- http://www.quora.com/What-is-a-Lagrange-multiplier
- http://www.youtube.com/watch?v=bsWFO-ERAnI
- http://www.youtube.com/watch?v=SQgsD2mPcYE
Before I embark on a possibly long and meandering journey of acquiring the knowledge to answer this question myself, can anyone comment on whether Lagrange multipliers are used in the sensitivity analysis of ILPs?
Thanks.
Although I agree with what Larry wrote, I'm not sure that Lagrangian multipliers are actually used for sensitivity analysis of IPs and MIPs in practice. One typically wants to look at the product of a multiplier with the change to the right hand side (RHS) of a constraint, and that's seldom (if ever?) meaningful in an IP.
To see why, consider the trivial problem of maximizing $2x+3y$ subject to the constraint $x + y \le 2$, with $x$ and $y$ both binary variables. The feasible region consists of the four corners of the unit square, the optimal solution is $(x,y)=(1,1)$ with objective value $5$, and the lone constraint is tangent to the feasible region at the optimum.
If we increase the RHS ($2$) of the constraint by any amount, the feasible region and optimal solution are unchanged, so the marginal benefit is $0$. If we decrease the RHS from $2$ to anything less than $2$ but greater than or equal to $1$, the optimum jumps to $(x,y)=(0,1)$ with value $3$.
So small (or even large) increases in the RHS do nothing, while small decreases in the RHS cause a step change in the objective value. Assuming your Lagrange multiplier ends up positive, multiplying it by a positive change to the RHS will overestimate (possibly badly) the change (none) in the objective; multiplying it by a small negative change to the RHS will likely underestimate the (negative) change to the objective value, again possibly badly.
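The step behavior above is easy to verify directly, since the toy problem has only four candidate points. Here is a minimal sketch (in Python, by brute-force enumeration; the function name `optimal_value` is just for illustration) that computes the optimal objective as a function of the RHS $b$:

```python
# Toy problem: max 2x + 3y  s.t.  x + y <= b,  with x, y binary.
# Enumerate the four binary points and watch how the optimum
# responds to changes in the right-hand side b.

from itertools import product

def optimal_value(b):
    """Optimal objective of the toy ILP for right-hand side b."""
    feasible = [(x, y) for x, y in product([0, 1], repeat=2) if x + y <= b]
    if not feasible:
        return None  # problem infeasible for b < 0
    return max(2 * x + 3 * y for x, y in feasible)

for b in [3.0, 2.5, 2.0, 1.9, 1.0, 0.9]:
    print(f"b = {b:>4}: optimal value = {optimal_value(b)}")
# b >= 2 always gives 5; b in [1, 2) gives 3; b in [0, 1) gives 0
```

The printout shows exactly the behavior described: raising $b$ above $2$ changes nothing, while any decrease below $2$ drops the objective from $5$ to $3$ in one step, so no single multiplier times the RHS change can approximate both directions.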