First thing: I'm an informatics student and know some algebra. However, this seems to be a bit over my head, so please be gentle with me. ;)
I have multiple sets of real variables. Let these sets be $$S_i = \{s_{i_1}, s_{i_2}, \dots\}$$
Also, for each of these sets I have an upper and lower bound $u_i$ and $l_i$ and the constraint that $\forall i \colon l_i < s_{i_1} < s_{i_2} < \dots < u_i$.
What I need to do is assign values to the $s_{i_j}$ such that they are "as evenly distributed as possible", i.e. the standard deviation of $s_{i_{j+1}} - s_{i_j}$ is minimized.
I tried to formulate this as a quadratic program by introducing a variable for the difference of every consecutive pair of $s_{i_j}$'s, i.e. $$d_{i,j} = s_{i_{j+1}} - s_{i_j}$$
Then I formulated the objective as $$\frac{\sum{d_{i,j}^2}}{\sum{(|S_i|-1)}} - \left(\frac{\sum{d_{i,j}}}{\sum{(|S_i|-1)}}\right)^2$$
(Yes, this is actually the variance, but whatever...)
Clearly, this is a quadratic objective. Now, my Gurobi solver tells me it cannot solve this because the "Q matrix is not positive semi-definite". :( I do know what positive semi-definiteness means, but I have no clue what exactly the Q matrix is. Also, knowing what the eigenvalues should look like does not really help me see how to solve this problem...
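(Editor's note, not from the original post: in the gap variables $d$, the variance objective above is the quadratic form $d^\top Q d$ with $Q = \frac{1}{n}I - \frac{1}{n^2}\mathbf{1}\mathbf{1}^\top$, where $n = \sum_i (|S_i|-1)$ is the total number of gaps. A quick numerical probe of that matrix's eigenvalues, for a small hypothetical $n$ of my own choosing:)

```python
import numpy as np

# The variance objective in the gap variables d is d^T Q d with
# Q = I/n - (1/n^2) * (all-ones matrix), n = total number of gaps.
n = 6  # small hypothetical example size
Q = np.eye(n) / n - np.ones((n, n)) / n**2
eigs = np.linalg.eigvalsh(Q)
print(eigs)  # eigenvalues are 0 (once) and 1/n (n-1 times), up to rounding
```

In exact arithmetic this $Q$ has eigenvalues $0$ and $1/n$, i.e. it is positive semi-definite; whether a solver accepts it can still depend on how the model was entered and on numerical tolerances.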
Some googling told me that this probably means the optimization problem is not convex (and that solving non-convex quadratic programs is hard?).
So my question is: is this problem really non-convex? If so, can it still be solved somehow? Or do I just have to reformulate it?
Thanks a lot!
Edit 1: For clarification: I want the $d_{i,j}$ to be as uniform as possible globally, not just within one $S_i$.
You can use a different formula for the variance to create a simple convex QP: $$ \begin{array}{l} s_{i,1} = \ell_i + \epsilon\\ s_{i,N_i} = u_i - \epsilon \\ d_{i,j} = s_{i,j} - s_{i,j-1} \\ d_{i,j} \ge \epsilon \\ \mu_i = \frac{1}{N_i-1}\sum_j d_{i,j} \\ v_{i,j} = d_{i,j} - \mu_i \\ \min V_i = \frac{1}{N_i-2} \sum_j v^2_{i,j} \end{array} $$
Here $\epsilon>0$ is a small constant to enforce your strict $<$ constraints, and $N_i$ is the number of elements in $S_i$. Note that $d$ starts at $d_{i,2}$ (so each $d_i$ has length $N_i-1$). Of course we can drop the constant $1/(N_i-2)$ from the objective. Because the objective is now a sum of squares of linear expressions, its Q matrix is positive semi-definite, so this is a convex QP. For the global uniformity asked for in Edit 1, replace the per-set means $\mu_i$ by a single pooled mean $\mu = \frac{1}{\sum_i (N_i-1)}\sum_{i,j} d_{i,j}$ and minimize $\sum_{i,j}(d_{i,j}-\mu)^2$.
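A minimal sketch of this sum-of-squares formulation, using `scipy.optimize.minimize` instead of Gurobi and pooling all gaps under one global mean (per Edit 1). The instance data (two sets, their bounds and sizes) are my own invented example, not from the post:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance: S_1 has 4 points in [0, 10], S_2 has 3 points in [0, 4].
bounds = [(0.0, 10.0, 4), (0.0, 4.0, 3)]
eps = 1e-3  # the small constant enforcing strict inequalities

def split(x):
    """Unpack the flat variable vector into one array of s-values per set."""
    out, k = [], 0
    for _, _, n in bounds:
        out.append(x[k:k + n]); k += n
    return out

def objective(x):
    # Pool all consecutive gaps d_{i,j} and penalize deviation from a
    # single global mean: sum of squares of linear expressions (convex).
    d = np.concatenate([np.diff(s) for s in split(x)])
    return np.sum((d - d.mean()) ** 2)

cons = []
for i, (l, u, n) in enumerate(bounds):
    cons.append({"type": "eq", "fun": lambda x, i=i, l=l: split(x)[i][0] - (l + eps)})
    cons.append({"type": "eq", "fun": lambda x, i=i, u=u: split(x)[i][-1] - (u - eps)})
    cons.append({"type": "ineq", "fun": lambda x, i=i: np.diff(split(x)[i]) - eps})

# Deliberately uneven (but monotone) starting point for each set.
x0 = np.concatenate([l + eps + (u - l - 2 * eps) * np.linspace(0, 1, n) ** 2
                     for l, u, n in bounds])
res = minimize(objective, x0, constraints=cons, method="SLSQP")
```

Since the per-set gap sums are fixed by the endpoint constraints, the global mean is a constant here and the optimum equalizes the gaps within each set, i.e. each $S_i$ comes out evenly spaced between $\ell_i+\epsilon$ and $u_i-\epsilon$.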