Is there a stochastic analogue of Zangwill's global convergence theorem for deterministic descent algorithms?


Zangwill's well-known global convergence theorem (Zangwill, W. I. 1969. *Nonlinear Programming: A Unified Approach*. Englewood Cliffs, NJ: Prentice-Hall) models a descent algorithm as a point-to-set map and gives sufficient conditions (a closed algorithmic map, a descent function, and iterates confined to a compact set) under which every accumulation point of the iterates is a solution, for generic deterministic descent algorithms.

Is there an analogous result that applies to stochastic optimisation algorithms? In particular, is there one that covers online stochastic gradient descent and mini-batch stochastic gradient descent?
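To make the two algorithms I have in mind concrete, here is a minimal sketch on a synthetic least-squares problem (all names and the diminishing Robbins-Monro-style step size are illustrative choices, not part of any particular convergence theorem). Online SGD updates from a single random sample per step; mini-batch SGD averages the gradient over a small random subset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimise f(w) = (1/2n) * ||X w - y||^2.
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

def stochastic_grad(w, idx):
    """Gradient of the squared loss estimated on the samples in idx."""
    Xi, yi = X[idx], y[idx]
    return Xi.T @ (Xi @ w - yi) / len(idx)

def sgd(batch_size, steps=5000, step0=0.5):
    """Online SGD (batch_size=1) or mini-batch SGD, with a diminishing
    step size alpha_t = step0 / (1 + t), a classical Robbins-Monro choice."""
    w = np.zeros(d)
    for t in range(steps):
        idx = rng.choice(n, size=batch_size, replace=False)
        w -= step0 / (1 + t) * stochastic_grad(w, idx)
    return w

w_online = sgd(batch_size=1)   # one sample per update
w_mini = sgd(batch_size=32)    # averaged gradient per update
```

The question is whether there is a general theorem, in the spirit of Zangwill's point-to-set-map framework, that certifies (almost-sure or in-probability) convergence for iterations like these, rather than the usual algorithm-specific Robbins-Monro / stochastic-approximation analyses.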