I am using the quasi-Newton (BFGS) method with Wolfe line search conditions to find the optimum of a convex function. Sometimes, when I run the code, it gets stuck at a point that is not the optimum. That is, I reach a certain point via the quasi-Newton descent direction where it is not possible to find a next point satisfying the Wolfe conditions, even for step sizes alpha as small as 10^-100.
I am guessing this may arise from a bad stopping criterion. I am approximating the gradient with central differences; does the stopping criterion depend on the step size used in the central differences?
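For reference, here is a minimal sketch of the kind of setup I mean (the function and step size are placeholders, not my actual code): a central-difference gradient with step size h, applied to a convex quadratic. The point is that the approximation error is roughly O(h^2) from truncation plus O(eps/h) from rounding, so a gradient-norm stopping tolerance cannot meaningfully be set below that error floor.

```python
import numpy as np

def central_diff_grad(f, x, h=1e-5):
    """Central-difference gradient approximation.

    Truncation error is O(h^2); floating-point rounding error is
    O(eps/h), so the total error cannot be driven to zero by
    shrinking h, and any tolerance on ||grad|| must stay above it.
    """
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

# Example: convex quadratic f(x) = 0.5 * x^T A x, exact gradient A x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda x: 0.5 * x @ A @ x
x = np.array([1.0, -1.0])

g_approx = central_diff_grad(f, x)
g_exact = A @ x

# A typical stopping test: stop when ||g|| <= tol, with tol chosen
# no smaller than the finite-difference error floor.
tol = 1e-6
converged = np.linalg.norm(g_approx) <= tol
```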
What should the stopping criterion be for the quasi-Newton method? Can anyone offer suggestions about the convergence of the quasi-Newton method?