Dual subgradient method - can we solve approximation of dual?


Consider the problem to minimize $f(x)$ under the constraints $x \leq b$ and $x \in X$.

I Lagrange relax the constraint $x \leq b$, obtaining the Lagrangian

$L(x,u) = f(x) + u^\top(x-b)$.

When using the subgradient method, we pick an arbitrary starting point $u^0$; in each iteration $k$ we hold $u^k$ fixed, compute $q(u^k) = \inf_{x\in X}L(x,u^k)$, and then take a step in a subgradient direction.
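To make the iteration concrete, here is a minimal sketch on a toy one-dimensional problem (my own choice of $f$, $X$, and $b$, not from the question): minimize $f(x) = (x-2)^2$ over $X = [0, 10]$ subject to $x \leq 1$. For this quadratic the inner minimization $q(u)$ has a closed form, and the dual update is projected subgradient ascent on $q$ with a diminishing step.

```python
import numpy as np

def f(x):
    return (x - 2.0) ** 2

b = 1.0  # relaxed constraint: x <= b

def inner_min(u):
    # q(u) = min_{x in X} f(x) + u*(x - b), with X = [0, 10].
    # Stationarity: 2(x - 2) + u = 0, then clip to X.
    x = np.clip(2.0 - u / 2.0, 0.0, 10.0)
    return x, f(x) + u * (x - b)

u = 0.0
for k in range(1, 2001):
    x_k, q_uk = inner_min(u)
    g = x_k - b               # (x_k - b) is a subgradient of q at u
    u = max(0.0, u + g / k)   # projected ascent step, step size 1/k

# For this toy problem the dual optimum is u* = 2 with q(u*) = 1,
# matching the primal optimum f(1) = 1 (no duality gap here).
```

The step sizes $t_k = 1/k$ satisfy the usual divergent-but-square-summable condition under which the dual subgradient method converges; with a fixed step size one only reaches a neighborhood of the optimum.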

Suppose calculating $q(u^k) = \inf_{x\in X}L(x,u^k)$ is still hard in some sense. Can we solve an approximate problem instead? Will the method still converge to a dual optimum? Under what conditions? Is there some way to bound the error?

Answers or references both appreciated.


Accepted answer:

In general, you need to be able to solve the subproblem to $\epsilon$-optimality in order for approximate subgradients to converge. In the specific case the poster (me) was interested in, that was not possible. For convergence, you need to guarantee that the iterations bring you geometrically closer to the optimal solution set, and there is no way to verify that here, since I wanted to approximately solve an NP-hard problem in each iteration.
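The $\epsilon$-optimality condition can be illustrated on the same kind of toy problem as above (again my own choice of $f$, $X$, and $b$): if each inner minimization returns a point whose Lagrangian value is within $\epsilon$ of the true infimum, then $(x_{\text{approx}} - b)$ is an $\epsilon$-subgradient of $q$, and the $\epsilon$-subgradient method with diminishing steps converges to within roughly $\epsilon$ of the dual optimum. The sketch below fakes the inexact inner solve by perturbing the exact minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
b, eps = 1.0, 0.05   # inner problems solved only to eps-accuracy (toy setup)

def f(x):
    return (x - 2.0) ** 2

def inner_min_approx(u):
    # Exact minimizer of f(x) + u*(x - b) over X = [0, 10] ...
    x = np.clip(2.0 - u / 2.0, 0.0, 10.0)
    # ... perturbed so the Lagrangian value is off by at most ~eps
    # (the Lagrangian is quadratic in x, so an O(sqrt(eps)) shift in x
    # changes its value by O(eps)).
    return np.clip(x + rng.uniform(-1.0, 1.0) * np.sqrt(eps), 0.0, 10.0)

u = 0.0
for k in range(1, 5001):
    x_hat = inner_min_approx(u)
    g = x_hat - b                # an eps-subgradient of q at u
    u = max(0.0, u + g / k)

# Evaluate the exact dual function at the final u; the optimum is q(u*) = 1.
x_exact = np.clip(2.0 - u / 2.0, 0.0, 10.0)
q_final = f(x_exact) + u * (x_exact - b)
```

This only demonstrates the benign case: the perturbation here has bounded, zero-mean error, which is exactly the kind of guarantee that is unavailable when the inner problem is NP-hard and solved heuristically, as the answer points out.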

Second answer:

In general, this is a very difficult problem.

If $X$ is a conic constraint set, potential reduction methods come closest to what you describe. They do not solve an approximate problem, but instead construct a candidate (potential) function to minimize.

If $X$ is some complicated inequality set involving several awkward functions, nine times out of ten it cannot be done even if the problem is convex, unless you are able to devise some nice embedding (based on my experience, at least).