Why is convexity more important than quasi-convexity in optimization?
Solution 1:
There are many reasons why convexity is more important than quasi-convexity in optimization theory. I'd like to mention one that the other answers so far haven't covered in detail. It is related to Rahul Narain's comment that the class of quasi-convex functions is not closed under addition.
Duality theory makes heavy use of optimizing functions of the form $f+L$ over all linear functions $L$. If a function $f$ is convex, then for any linear $L$ the function $f+L$ is convex, and hence quasi-convex. I recommend proving the converse as an exercise: if $f+L$ is quasi-convex for every linear function $L$, then $f$ is convex. Together, these give: $f+L$ is quasi-convex for all linear functions $L$ if and only if $f$ is convex.
Thus, for every quasi-convex but non-convex function $f$ there is a linear function $L$ such that $f+L$ is not quasi-convex. I encourage you to construct an example of a quasi-convex function $f$ and a linear function $L$ such that $f+L$ has local minima which are not global minima.
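One concrete choice for this exercise (my own illustration, not part of the original answer) is $f(x)=\sqrt{|x|}$, which is quasi-convex because every sublevel set $\{x : f(x)\le t\}=[-t^2,t^2]$ is an interval, together with $L(x)=-x$. The sum $g=f+L$ has a local minimum at $x=0$ that is not global ($g(4)=-2<0=g(0)$), and its sublevel sets are no longer intervals. The sketch below checks this numerically on a grid:

```python
import numpy as np

# Illustration (my own example, not from the answer):
# f(x) = sqrt(|x|) is quasi-convex, but adding the linear
# function L(x) = -x destroys quasi-convexity.

def f(x):
    return np.sqrt(np.abs(x))

def g(x):  # g = f + L with L(x) = -x
    return f(x) - x

def sublevel_is_interval(h, t, xs):
    """Check numerically whether {x in xs : h(x) <= t} is contiguous."""
    idx = np.flatnonzero(h(xs) <= t)
    return len(idx) == 0 or np.all(np.diff(idx) == 1)

xs = np.linspace(-5, 5, 10001)

# f is quasi-convex: every sampled sublevel set is an interval.
assert all(sublevel_is_interval(f, t, xs) for t in np.linspace(0, 3, 31))

# g is not: {g <= 0.1} has a small component near x = 0 and a
# separate component for x >= 0.79, with a gap in between.
assert not sublevel_is_interval(g, 0.1, xs)

# x = 0 is a local minimum of g that is not global:
# g > 0 on a punctured neighborhood of 0, yet g(4) = -2 < g(0) = 0.
assert g(0.0) == 0.0 and g(0.25) > 0 and g(-0.25) > 0 and g(4.0) == -2.0
```

Any local search started near $x=0$ would stop at the non-global local minimum, which is exactly the failure mode the exercise is meant to exhibit.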
So, in some sense, the convex functions are exactly the class of functions for which the techniques used in duality theory apply.
Solution 2:
Sophisticated optimization methods are developed to deal with problems where the solution is not obvious. In general, we may not know much about the behavior of the cost function, so we cannot decide a priori whether there is a unique minimum. We need optimization methods to find the minimum, or at least a minimum.
For convex problems, the global behavior is determined by purely local properties. If a function is locally convex everywhere, it is globally convex too. So for such problems, local solutions suffice, which makes them much easier to deal with.
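The local-to-global principle can be illustrated numerically. In the sketch below (my own example, not from the answer), $f(x) = e^x + x^2$ is locally convex everywhere since $f''(x) = e^x + 2 > 0$, and a randomized check confirms that the global convexity inequality $f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda)f(y)$ holds across the whole sampled domain:

```python
import numpy as np

# Illustration (my own example): f(x) = exp(x) + x**2 satisfies
# f''(x) = exp(x) + 2 > 0 everywhere (local convexity), and a
# randomized test confirms the global convexity inequality.

def f(x):
    return np.exp(x) + x**2

rng = np.random.default_rng(0)
x = rng.uniform(-10, 10, 100_000)
y = rng.uniform(-10, 10, 100_000)
lam = rng.uniform(0, 1, 100_000)

lhs = f(lam * x + (1 - lam) * y)
rhs = lam * f(x) + (1 - lam) * f(y)

# Small tolerance for floating-point rounding only; the inequality
# itself holds exactly in exact arithmetic.
assert np.all(lhs <= rhs + 1e-8 * np.abs(rhs) + 1e-8)
```

A local minimizer of such a function is automatically the global minimizer, which is why local search is enough for convex problems.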