Given $n$ points sampled independently and uniformly from the unit interval $(0,1)$, I want to minimise the sum of the absolute differences of consecutive terms over some permutation of these points.
I do this by splitting the interval into $N$ subintervals, allocating $x_i$ to the $j$-th subinterval if $x_i \in [j/N, (j+1)/N)$. Then the distance between any two points in the same subinterval is less than $1/N$, and the distance between points in any two consecutive subintervals is less than $2/N$. Since the distribution is uniform, we expect roughly one point per subinterval (taking $N = n$), so the sum of the distances should be less than $N \cdot 2/N = 2$. What I want to know is: is there any formal way of taking into account the fact that having one point per subinterval maximises the potential sum of distances? Can I always say that the sum is bounded by $2$ for any allocation of points to subintervals?
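As a quick numerical sanity check (my own sketch, not part of the argument itself): visiting the points in sorted order, the sum of consecutive gaps telescopes to $\max - \min \le 1$, which is comfortably within the bound of $2$ above.

```python
import random

def sorted_gap_sum(points):
    # Visit the points in sorted order; the sum of consecutive
    # gaps then telescopes to max(points) - min(points) <= 1.
    pts = sorted(points)
    return sum(b - a for a, b in zip(pts, pts[1:]))

random.seed(0)
for n in (5, 50, 500):
    pts = [random.random() for _ in range(n)]
    s = sorted_gap_sum(pts)
    assert s <= 2  # the bound discussed above; in fact s <= 1 here
    print(n, round(s, 4))
```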
I guess one painful way to do it is to list out all the possibilities. E.g. possibility one is that $n-1$ points are clustered in the $N$-th subinterval and one point is in the first, in which case (taking $N = n$) the sum is bounded by $1 + (n-2)/n = 2 - 2/n < 2$: one jump of length less than $1$ from the first subinterval to the cluster, plus $n-2$ gaps of length less than $1/n$ within the cluster.
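To illustrate that clustered case numerically (again just a sketch, with one point drawn from the first subinterval and the remaining $n-1$ from the last, and $N = n$):

```python
import random

def clustered_gap_sum(n, seed=0):
    # One point in [0, 1/n), the other n-1 points in [(n-1)/n, 1).
    rng = random.Random(seed)
    pts = [rng.uniform(0, 1 / n)]
    pts += [rng.uniform((n - 1) / n, 1) for _ in range(n - 1)]
    pts.sort()
    return sum(b - a for a, b in zip(pts, pts[1:]))

for n in (5, 20, 100):
    s = clustered_gap_sum(n)
    assert s < 2  # well below the bound of 2 discussed above
    print(n, round(s, 4))
```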
Since the uniform distribution maximises entropy, is there any link that would let me say something like: a uniform dispersion of the points maximises the sum of distances between them?