I’ve seen many papers (for example, this one on sequential convex programming) where non-convex problems are solved with convex optimization methods: the problem is linearized at iteration $k$ about the solution from iteration $k-1$, and a trust region constraint $|x^{k}-x^{k-1}|\leq\delta$ is imposed to make convergence to a solution more robust. However, when I recently added a trust region constraint to a problem I was solving with a sequential convex programming approach, I found that without the constraint the iterates converged to a stable solution, but with the constraint the problem blew up, giving me NaN values (in MATLAB) after the first iteration.
I used the following additional constraint in my problem:
abs(xk - xkm1) <= delta
where xk is the state vector at the current iteration, xkm1 is the state vector from the previous iteration, and delta is the maximum allowable elementwise change in the states between iterations $k$ and $k-1$.
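For context, the scheme I have in mind can be sketched on a toy scalar problem (in Python/NumPy here rather than MATLAB; the objective and the shrinking trust-region schedule are made up purely for illustration, not taken from my actual problem):

```python
import numpy as np

# Toy nonconvex objective (a made-up stand-in, not my actual problem):
#   f(x) = x^4 - 3 x^2 + x
def grad(x):
    return 4.0 * x**3 - 6.0 * x + 1.0

# One sequential-convex-programming step: minimize the linearization
#   f(xk) + grad(xk) * (x - xk)
# subject to the trust region |x - xk| <= delta. For a linear objective
# the minimizer sits on the trust-region boundary, opposite the gradient.
def scp_step(xk, delta):
    return xk - delta * np.sign(grad(xk))

xk, delta = 2.0, 0.5
for _ in range(60):
    xk = scp_step(xk, delta)
    delta *= 0.8  # shrink the trust region so the iterates can settle
print(xk)         # settles near the stationary point x ≈ 1.13
```

With a fixed delta the iterates just oscillate across the stationary point at the trust-region radius; shrinking delta lets them settle, which is exactly why I expected the constraint to help convergence rather than hurt it.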
I thought trust regions were supposed to improve convergence robustness, but adding the simple constraint above seems to have achieved the opposite. I’m hoping it’s just a trivial mistake I’ve made somewhere, so I’d like to know whether trust regions causing a problem to blow up is a common pitfall for beginners in this field.