Initialization of Limited-memory BFGS (using libLBFGS)


I am using the libLBFGS package to minimize an objective function whose first derivative (with respect to the optimization variable) is known and computable. I use the default parameters, as in the sample code on the project page. The code runs and returns a solution. The problem lies in the initialization phase: for some initial values the returned solution is fine, but other starting points (even slightly different ones) lead to different solutions, sometimes very different ones. I would like to ask the following:

a) If this behaviour is related to the (non-)convexity of the objective function, is there a way to prove whether the L-BFGS algorithm will or will not converge?

b) Has anyone else used this library (libLBFGS) for unconstrained non-linear optimization before? I suspect I am missing something when tuning the algorithm through the library's API; its documentation is not clear enough, in my opinion.

c) Would you say that implementing L-BFGS from scratch is a feasible/rational choice?

Thanks a lot!

1 answer:

I kept studying my objective function and the way I apply the L-BFGS algorithm to minimize it. I am fairly sure my objective function is convex (I have not proven it yet, but plots in several cases support it). Still, L-BFGS does not converge to the same solution from different initializations. It is more stable now, but it still does not behave the way I would like...

Has anyone else worked with the L-BFGS algorithm before? My main issue remains the initialization stage and why it affects the final result so much. In addition, could the algorithm's various parameters be responsible for it not finding the same solution from different initial values? Thanks!
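On the parameter question: the knobs most likely to matter live in `lbfgs_parameter_t`. A sketch of the fields I would tighten first (field names as I recall them from libLBFGS's `lbfgs.h`; verify against your installed version before relying on this):

```c
#include <lbfgs.h>

lbfgs_parameter_t param;
lbfgs_parameter_init(&param);  /* start from the library defaults */

param.m = 10;              /* history size; the default is 6 */
param.epsilon = 1e-8;      /* gradient-based stopping tolerance; if this is
                              loose, runs from different initializations can
                              legitimately stop at different points */
param.past = 3;            /* also stop on relative f-decrease over `past`
                              iterations, controlled by delta */
param.delta = 1e-10;
param.max_iterations = 0;  /* 0 = iterate until a stopping test fires */

/* then: ret = lbfgs(n, x, &fx, evaluate, progress, NULL, &param); */
```

A loose `epsilon` is the usual culprit for "different initializations, different answers" on a convex problem: each run stops as soon as the gradient test fires, at whichever nearly-optimal point it happens to be passing through.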