In the book I am currently reading, it says:

"Gradient-based methods are useful for one global optimum and no additional local optima: (quasi-)concave for maximums, (quasi-)convex for minimums."
It is clear to me why concave and convex functions work: concavity/convexity implies that there is only one maximum/minimum, so if the function is differentiable a gradient-based method should work well.
However, I am struggling with the quasi-convex/quasi-concave case. Look at the simple example in What's the difference between quasi-concavity and concavity?: on a function like that, a gradient-based method will struggle, right?
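To make my worry concrete, here is a minimal sketch with a toy quasi-concave function of my own (not the one from the linked question): it is nondecreasing with a flat plateau, so the gradient is zero on the plateau and plain gradient ascent never leaves it.

```python
def f(x):
    # Quasi-concave: nondecreasing, flat for x <= 0, flat again for x >= 1.
    # Every upper level set {x : f(x) >= a} is an interval, hence convex.
    if x <= 0:
        return 0.0
    return min(x, 1.0)

def grad(x):
    # (Sub)gradient of f: zero on both plateaus, 1 on the rising part.
    return 1.0 if 0 < x < 1 else 0.0

def gradient_ascent(x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x += lr * grad(x)
    return x

# Started on the left plateau, the iterate never moves, even though
# larger values of f exist to the right:
print(gradient_ascent(-2.0))  # stays at -2.0

# Started on the slope, it climbs up to the top plateau:
print(gradient_ascent(0.5))
```

So at least for quasi-concave functions with flat regions, the quote seems too optimistic as stated.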
So I assume the quote above is not sufficient. Can I use gradient-based methods on quasi-concave/quasi-convex functions, and when should I be careful?