Consider the problem to minimize $f(x)$ under the constraints $x \leq b$ and $x \in X$.
I Lagrangian-relax the constraint $x \leq b$, obtaining
$$L(x,u) = f(x) + u^T(x-b).$$
In the subgradient method, we start from an arbitrary multiplier $u^0 \geq 0$; at iteration $k$ we hold $u^k$ fixed, compute $q(u^k) = \inf_{x\in X}L(x,u^k)$, and then take a step from $u^k$ in a subgradient direction (projecting back onto $u \geq 0$).
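To make the iteration above concrete, here is a minimal sketch on a toy instance I made up for illustration (it is not from the question): minimize $f(x) = -x$ over $X = \{0,\dots,10\}$ subject to $x \leq 3$. Here the subproblem is a trivial enumeration, and $x^* - b$ is a subgradient of $q$ at $u$ whenever $x^*$ attains the infimum.

```python
# Projected dual subgradient method for the toy instance
#   min f(x) = -x  s.t.  x <= 3,  x in X = {0, 1, ..., 10}.
# All names here are illustrative, not from the original post.

def q_and_subgrad(u, X=range(11), b=3):
    """Evaluate q(u) = min_{x in X} f(x) + u*(x - b) and return
    (q(u), x* - b), where x* - b is a subgradient of q at u."""
    vals = [(-x + u * (x - b), x) for x in X]
    qval, xstar = min(vals)
    return qval, xstar - b

u = 0.0                         # u^0: arbitrary nonnegative start
for k in range(200):
    qval, g = q_and_subgrad(u)
    step = 1.0 / (k + 1)        # classic diminishing, non-summable step size
    u = max(0.0, u + step * g)  # projected ascent step, keeping u >= 0

# u oscillates with shrinking amplitude around the dual optimum u* = 1,
# where q(u*) = -3 matches the primal optimum f(3) = -3.
```

The diminishing, non-summable step size $t_k = 1/(k+1)$ is the textbook choice that guarantees convergence of the dual iterates when the exact subproblem minimizer is available.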
Suppose computing $q(u^k) = \inf_{x\in X}L(x,u^k)$ is itself computationally hard. Can we instead solve an approximate subproblem? Will the method still converge to a dual optimum? Under what conditions? Is there a way to bound the error?
Answers or references both appreciated.
In general, you need to be able to solve the subproblem to $\epsilon$-optimality in order for approximate ($\epsilon$-)subgradient methods to converge. In the specific case the poster (me) was interested in, that was not possible. For convergence, you need a guarantee that each iteration brings you geometrically closer to the optimal solution set, and there is no way to verify that here, since the subproblem I wanted to solve approximately in each iteration is NP-hard.
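For what an $\epsilon$-optimal subproblem solve buys you: if $x_\epsilon$ satisfies $L(x_\epsilon, u) \leq q(u) + \epsilon$, then $g = x_\epsilon - b$ is an $\epsilon$-subgradient of $q$ at $u$, i.e. $q(v) \leq q(u) + g(v-u) + \epsilon$ for all $v$. A sketch on the same toy instance as above (my own illustrative example, with the inexact solver simulated by randomly picking any $\epsilon$-minimizer):

```python
# Epsilon-subgradient method with an inexact subproblem oracle,
# on the toy instance min -x s.t. x <= 3, x in {0,...,10}.
# The random choice simulates a solver that only guarantees
# eps-optimality; all names are illustrative.
import random

def eps_oracle(u, eps, X=range(11), b=3):
    """Return x_eps - b for some eps-minimizer x_eps of
    L(x,u) = -x + u*(x - b) over X. This is an eps-subgradient
    of q at u: q(v) <= q(u) + g*(v - u) + eps for all v."""
    vals = [(-x + u * (x - b), x) for x in X]
    qmin = min(vals)[0]
    candidates = [x for v, x in vals if v <= qmin + eps]
    x_eps = random.choice(candidates)   # any eps-optimal point may come back
    return x_eps - b

random.seed(0)
u = 0.0
for k in range(500):
    eps_k = 10.0 / (k + 1)              # tolerance driven to zero
    g = eps_oracle(u, eps_k)
    u = max(0.0, u + g / (k + 1))       # projected eps-subgradient step
```

The standard sufficient condition is exactly the one in the answer: the tolerances $\epsilon_k$ must be controllable and driven to zero (e.g. $\epsilon_k \to 0$ with diminishing, non-summable steps), in which case the iterates still approach the dual optimum; with a fixed tolerance $\epsilon$ you only reach an $\epsilon$-neighborhood of it. If the solver cannot certify any $\epsilon$, as with a heuristic for an NP-hard subproblem, neither guarantee applies.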