Time bound for gradient descent


Have you seen any analytic bound for gradient descent (on the number of iterations needed to reach error $\epsilon$, possibly depending on the form of the cost function and the initial point)?

Here is the problem: I have a cost function that is optimized by gradient descent. Someone has modified the cost function in some way and shown that gradient descent on the modified function converges to the same result as on the original one. I want to compare the convergence-time bounds for these two algorithms.

Are you aware of any such bounds for similar optimization algorithms?


On BEST ANSWER

Here's the reference you want: Introductory Lectures on Convex Optimization by Yurii Nesterov. Bounds galore! Performance generally depends on the Lipschitz continuity of the gradient, as well as on any strong convexity of the function. Keep in mind, though, that these bounds tend to be worst case. There really is no substitute for experimentation when comparing real-world performance.
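As an illustration (not from the answer itself), here is a minimal sketch of the kind of bound involved, under the assumption of a strongly convex quadratic cost $f(x) = \tfrac12 x^\top A x$. For an $L$-smooth, $\mu$-strongly convex function, gradient descent with step size $2/(L+\mu)$ satisfies the worst-case linear rate $\|x_k - x^*\| \le \left(\frac{L-\mu}{L+\mu}\right)^k \|x_0 - x^*\|$, which the code checks numerically at every iteration:

```python
import numpy as np

# Sketch: gradient descent on f(x) = 0.5 * x^T A x (minimizer x* = 0),
# compared against the standard worst-case bound
#   ||x_k|| <= ((L - mu)/(L + mu))^k * ||x_0||
# for step size 2/(L + mu), where mu and L are the smallest and
# largest eigenvalues of A (strong convexity / smoothness constants).

rng = np.random.default_rng(0)
n = 20
Q = rng.standard_normal((n, n))
A = Q.T @ Q + np.eye(n)            # symmetric positive definite
eigs = np.linalg.eigvalsh(A)
mu, L = eigs[0], eigs[-1]

x = rng.standard_normal(n)         # initial point
r0 = np.linalg.norm(x)
step = 2.0 / (L + mu)
rate = (L - mu) / (L + mu)

for k in range(1, 201):
    x = x - step * (A @ x)         # gradient of f is A x
    bound = rate**k * r0           # theoretical worst-case distance
    assert np.linalg.norm(x) <= bound + 1e-12
```

The bound is often loose for a specific problem instance (the iterates here typically contract faster than the worst-case rate), which is why the answer recommends experimentation alongside the theory.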