Tolerance in x in Linear Programming


Suppose we have a linear programming problem of the form:

\begin{align*} & \min_{x}c^{T}x\\ \text{s.t.}\text{: } & Ax=b\text{, }x\geq0 \end{align*}

Generally, the termination condition of optimization algorithms is based on setting a maximum tolerance $\epsilon>0$ for the approximation of the objective function. Formally, the stopping condition is: if $x^{\ast}$ is the point at which the objective function $f\left( x\right) =c^{T}x$ attains its minimum, then at the $k$-th step the iterate $x_{k}$ satisfies $\left\vert c^{T}\left( x_{k}-x^{\ast}\right) \right\vert <\epsilon$. If we want $\left\Vert x_{k}-x^{\ast}\right\Vert <\delta$ for an arbitrary $\delta>0$, how should we select $\epsilon$ in the termination condition?


There is 1 solution below.


In my experience, optimization solvers (for LP, but also for NLP) attempt to solve the optimality conditions, and therefore terminate when those conditions are numerically satisfied.
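As an illustration (the solver choice and problem data are my own, not from the question), SciPy's `linprog` with the HiGHS backend stops when primal and dual feasibility hold to within internal tolerances, reporting success via a status code rather than via a bound on $\left\vert c^{T}\left( x_{k}-x^{\ast}\right) \right\vert$:

```python
import numpy as np
from scipy.optimize import linprog

# A tiny instance of  min c^T x  s.t.  Ax = b, x >= 0:
#   min x1 + x2  s.t.  x1 + 2*x2 = 1, x >= 0.
c = np.array([1.0, 1.0])
A_eq = np.array([[1.0, 2.0]])
b_eq = np.array([1.0])

# Default bounds are x >= 0, matching the problem in the question.
res = linprog(c, A_eq=A_eq, b_eq=b_eq, method="highs")

print(res.status)  # 0: the optimality conditions were satisfied numerically
print(res.x)       # optimal vertex (0, 0.5)
print(res.fun)     # optimal objective value 0.5
```

Note that the solver certifies optimality through the feasibility conditions; no knowledge of $x^{\ast}$ is required.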

The simplex method, for example, maintains primal feasibility throughout and terminates when the current iterate is also dual feasible.
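The simplex termination test can be sketched numerically: at a basic feasible solution, dual feasibility means all reduced costs are nonnegative. A minimal sketch on a hypothetical one-constraint problem of my own choosing:

```python
import numpy as np

# Problem: min x1 + x2  s.t.  x1 + 2*x2 = 1, x >= 0.
c = np.array([1.0, 1.0])
A = np.array([[1.0, 2.0]])

def reduced_costs(basis):
    """Dual vector y solves B^T y = c_B; reduced costs are c - A^T y."""
    B = A[:, basis]
    y = np.linalg.solve(B.T, c[basis])
    return c - A.T @ y

# Basis {0}, i.e. x = (1, 0): reduced costs are (0, -1).
# A negative reduced cost -> not dual feasible -> simplex continues.
print(reduced_costs([0]))

# Basis {1}, i.e. x = (0, 0.5): reduced costs are (0.5, 0) >= 0
# -> dual feasible -> simplex terminates at this vertex.
print(reduced_costs([1]))
```

In practice the test is "reduced costs $\geq -\epsilon_{\text{dual}}$" for a small dual-feasibility tolerance, which is the tolerance solvers actually expose.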

Your suggestion relies on prior knowledge of the solution $x^{\ast}$; in practice that is rarely available (except, e.g., when benchmarking methods).
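Moreover, even in principle, no choice of $\epsilon$ can guarantee $\left\Vert x_{k}-x^{\ast}\right\Vert <\delta$ when the optimum is not unique. A sketch (an illustrative example of my own, not from the question): consider

\begin{align*} & \min_{x}x_{1}+x_{2}\\ \text{s.t.}\text{: } & x_{1}+x_{2}=1\text{, }x\geq0 \end{align*}

Here $c^{T}x=1$ at every feasible point, so $\left\vert c^{T}\left( x_{k}-x^{\ast}\right) \right\vert =0$ for every feasible $x_{k}$, yet $\left\Vert x_{k}-x^{\ast}\right\Vert$ can be as large as $\sqrt{2}$ (e.g. $x_{k}=\left( 1,0\right)$, $x^{\ast}=\left( 0,1\right)$). A bound of the form you want requires additional structure, such as a unique optimal vertex.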