Online Non-convex optimization


Can stochastic gradient descent be used for online non-convex optimization? If not, what are the suitable algorithms?

Online non-convex optimization is an intriguing area at the intersection of online learning and non-convex optimization. Recently, there have been advances in online non-convex optimization based on access to an offline optimization oracle.

Roughly speaking, these works assume that there is an offline optimization oracle that can solve the following problem: $$\hat{w} = \mathop{\arg\min}_{w\in \mathcal{W}} \{ f(w) - \sigma^{\mathrm{T}}w\}.$$

Note that the function $f: \mathcal{W} \mapsto \mathbb{R}$ can be any non-convex function. The algorithm then follows the Follow-the-Perturbed-Leader (FTPL) framework, calling the offline optimization oracle at each round.
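To make the FTPL idea concrete, here is a minimal sketch. It assumes a toy offline oracle implemented by exhaustive grid search over a finite one-dimensional domain (a stand-in for the oracle assumed above; `grid_oracle`, `ftpl`, and the example losses are all illustrative names, not from [1] or [2]). At each round, a fresh random linear perturbation $\sigma_t$ is drawn and the oracle minimizes the cumulative past loss minus $\sigma_t^{\mathrm{T}} w$:

```python
import numpy as np

def grid_oracle(objective, grid):
    """Toy offline optimization oracle: exhaustive search over a finite grid.
    Stands in for the assumed oracle argmin_{w in W} { f(w) - sigma^T w }."""
    values = [objective(w) for w in grid]
    return grid[int(np.argmin(values))]

def ftpl(losses, grid, eta=1.0, seed=0):
    """Follow-the-Perturbed-Leader sketch: at round t, play
    argmin_{w in W} [ sum_{s<t} f_s(w) - sigma_t * w ],
    drawing a fresh exponential perturbation sigma_t each round
    (exponential noise is the choice analyzed in [2])."""
    rng = np.random.default_rng(seed)
    plays = []
    for t in range(len(losses)):
        sigma = rng.exponential(scale=1.0 / eta)  # random linear perturbation
        past = losses[:t]  # losses revealed so far
        plays.append(grid_oracle(
            lambda w: sum(f(w) for f in past) - sigma * w, grid))
    return plays

# Example: non-convex per-round losses on W = [-1, 1]
grid = np.linspace(-1.0, 1.0, 201)
losses = [lambda w, c=c: np.sin(3 * w) + (w - c) ** 2
          for c in (0.5, -0.3, 0.1)]
plays = ftpl(losses, grid)
```

Each per-round loss here is non-convex on $[-1, 1]$, yet the oracle call remains tractable because the toy domain is a finite grid; in the papers cited below, the oracle is an abstract assumption rather than a grid search.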

[1] proves that the algorithm achieves an $O(T^{2/3})$ sublinear regret, and [2] further improves the result to the optimal $O(\sqrt{T})$ rate.
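For clarity, the regret being bounded in these results is the standard notion (comparing the learner's cumulative loss to the best fixed decision in hindsight):

```latex
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} f_t(w_t) \;-\; \min_{w \in \mathcal{W}} \sum_{t=1}^{T} f_t(w),
```

so an $O(\sqrt{T})$ bound means the average per-round excess loss vanishes at rate $O(1/\sqrt{T})$.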

Reference

[1] Naman Agarwal, Alon Gonen, and Elad Hazan. Learning in non-convex games with an optimization oracle. In Proceedings of the 32nd Conference on Learning Theory (COLT), pages 18–29, 2019.

[2] Arun Sai Suggala and Praneeth Netrapalli. Online non-convex learning: Following the perturbed leader is optimal. In Proceedings of the 31st International Conference on Algorithmic Learning Theory (ALT), pages 845–861, 2020.