In gradient descent, when optimizing a convex function with a global minimum, one often assumes either that
1. the function is Lipschitz, or
2. its gradient is Lipschitz.
There are examples where 2 does not imply 1 (one is sketched below). Are there any examples where 1 does not imply 2, restricted to the convex case with a global minimum?
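
For concreteness, a standard instance of the first direction is, for example, $f(x) = x^2$ on $\mathbb{R}$:
$$|\nabla f(x) - \nabla f(y)| = 2|x - y|,$$
so the gradient is $2$-Lipschitz and $f$ is convex with a global minimum at $x = 0$, yet
$$\frac{|f(x) - f(0)|}{|x - 0|} = |x| \to \infty \quad \text{as } |x| \to \infty,$$
so $f$ itself is not Lipschitz on $\mathbb{R}$.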