People who compare the performance of different algorithms in convex optimization very often use randomly generated data. This is especially common in compressed sensing and signal processing.
Is this practice justified? How can one repeat their experiments? How can this prove that Algorithm 1 is more effective than Algorithm 2? Perhaps with another instance of the random data Algorithm 2 will beat Algorithm 1.
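To make the reproducibility concern concrete: the usual remedy in such papers is to fix the seed of the random number generator, so the "random" instance is fully deterministic. Here is a minimal sketch using NumPy (my choice of library; the dimensions and the compressed-sensing setup are illustrative, not taken from any particular paper):

```python
import numpy as np

# Toy compressed-sensing instance: recover a k-sparse x_true from b = A x_true.
# Fixing the seed makes the instance reproducible across runs and machines
# (given the same NumPy version and bit-generator).
rng = np.random.default_rng(seed=0)

m, n, k = 20, 50, 5                      # measurements, dimension, sparsity
A = rng.standard_normal((m, n))          # random measurement matrix
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
b = A @ x_true                           # observed measurements

# Re-running with the same seed reproduces the instance exactly.
rng2 = np.random.default_rng(seed=0)
A2 = rng2.standard_normal((m, n))
assert np.array_equal(A, A2)
```

Of course, a fixed seed only makes one particular draw repeatable; it does not answer the statistical question of whether Algorithm 1 beats Algorithm 2 on the whole distribution of instances.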
Are there classical test problems for comparing algorithms? In image processing, for example, there are several standard test images that serve this purpose.
I am also interested in test problems in convex optimization that are highly nonlinear, i.e., not of the form "quadratic plus a simple convex function" such as a norm or an indicator function.
I believe this is because there is a vast variety of convex problems with totally different structures. The term "convex problem" does not define a common representation for the family of problems that we could work with, so building a library of convex test problems seems impossible. And because nobody has built such a library, everybody just solves random problems.
Linear algebra is simpler. All problems of the form "solve a linear system" share the same structure: a matrix and a vector. All problems of the form "find the eigenvalues of this matrix" likewise share the same structure.
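This uniformity is visible in code: because every linear-system instance is just a pair (A, b), one canonical interface covers the entire family. A minimal sketch, again assuming NumPy (the answer names no particular library):

```python
import numpy as np

# Any instance of "solve a linear system" is fully specified by (A, b),
# so a single routine handles them all.
rng = np.random.default_rng(seed=1)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # diagonally shifted for a well-posed example
b = rng.standard_normal(4)

x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)   # the residual vanishes up to floating-point error
```

There is no analogous single signature for "minimize this convex function": the data defining the instance can be a matrix, an oracle, a graph, a set of constraints, and so on.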
P.S. I am not considering specialized or iterative algorithms for equation solving here.