In the mathematical optimization literature it is common to distinguish problems according to whether or not they are convex. The reason seems to be that for convex problems every local minimum is automatically a global minimum, so local methods such as gradient (steepest) descent suffice to find globally optimal solutions, but I am not convinced this is the whole story.
For example, the function $|x|^{0.001}$ is not convex (it is strictly concave on each side of the origin), yet it has a single global minimizer at $x = 0$.
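As a quick numerical sanity check (a minimal sketch of my own, not part of the original post), the midpoint convexity inequality $f\bigl(\tfrac{x+y}{2}\bigr) \le \tfrac{f(x)+f(y)}{2}$ already fails for this function at $x = 0$, $y = 1$:

```python
# Midpoint-convexity check for f(x) = |x|**0.001.
def f(x):
    return abs(x) ** 0.001

x, y = 0.0, 1.0
mid = f((x + y) / 2)        # f(0.5) ~ 0.99931
chord = (f(x) + f(y)) / 2   # (0 + 1) / 2 = 0.5
print(mid <= chord)         # False: the chord lies below the graph,
                            # so f is not convex
```

At the same time $f(0) = 0 < f(x)$ for every $x \neq 0$, so $x = 0$ is still the unique global minimizer even though convexity fails.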

So why is convexity so important in optimization? Why is, e.g., quasi-convexity often not enough?
There are many reasons why convexity is more important than quasi-convexity in optimization theory. I'd like to mention one that the other answers so far haven't covered in detail. It is related to Rahul Narain's comment that the class of quasi-convex functions is not closed under addition.
Duality theory makes heavy use of minimizing functions of the form $f+L$, where $L$ ranges over all linear functions. If a function $f$ is convex, then for any linear $L$ the function $f+L$ is convex, and hence quasi-convex. I recommend proving the converse as an exercise: if $f+L$ is quasi-convex for every linear function $L$, then $f$ is convex.
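The first claim is a one-line computation worth spelling out: since $L$ is linear, for any points $x, y$ and any $\lambda \in [0,1]$,
$$(f+L)\bigl(\lambda x + (1-\lambda)y\bigr) = f\bigl(\lambda x + (1-\lambda)y\bigr) + \lambda L(x) + (1-\lambda)L(y) \le \lambda\,(f+L)(x) + (1-\lambda)\,(f+L)(y),$$
where the inequality uses the convexity of $f$. So $f+L$ is convex, and every convex function is quasi-convex.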
Thus, for every quasi-convex but non-convex function $f$ there is a linear function $L$ such that $f+L$ is not quasi-convex. I encourage you to construct an example of a quasi-convex function $f$ and a linear function $L$ such that $f+L$ has local minima which are not global minima.
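One candidate construction (my own illustration, offered as a sketch rather than the intended answer to the exercise): take $f(x) = \sqrt{|x|}$, which is quasi-convex but not convex, and $L(x) = -x/2$. Then $g = f + L$ has a local minimum at $x = 0$ that is not global, since $g(x) \to -\infty$ as $x \to +\infty$. A quick numerical check:

```python
from math import sqrt

# Candidate example: f quasi-convex (sublevel sets are intervals),
# L linear, yet g = f + L has a non-global local minimum at x = 0.
f = lambda x: sqrt(abs(x))  # quasi-convex, not convex
L = lambda x: -x / 2        # linear
g = lambda x: f(x) + L(x)

print(g(0.0))               # 0.0
print(g(0.01), g(-0.01))    # both positive, so x = 0 is a local minimum
print(g(100.0))             # 10 - 50 = -40 < 0, so that local minimum
                            # is not global
```

In particular, the sublevel set $\{x : g(x) \le 0\}$ contains $x = 0$ and $x = 100$ but not $x = 1$ (where $g(1) = 1/2$), so it is not an interval and $g$ is not quasi-convex.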
So, in some sense, convex functions are exactly the class for which the techniques used in duality theory apply.