Suppose I have $N$ real numbers and I already know their mean, $\bar{x}$:
$$ \bar{x}=\frac{1}{N}\sum_{i=1}^{N}x_i $$
(but I don't know the individual values $x_1,x_2,\dots,x_N$).
I want to find the smallest possible value of the mean of the squares, $y$: $$ y=\frac{1}{N}\sum_{i=1}^{N}x_i^2 $$
I thought a bit about the case where $N=2$ and it seemed like the answer should be $x_1=x_2=\bar{x}$. For example, if $\bar{x}=5$, then $5^2+5^2=50$, but $4^2+6^2=52$ and $3^2+7^2=58$, and so on.
I feel like this should be a very simple thing to prove for any $N$, but I had to dig into the back of an old (engineering) textbook to recall the method of Lagrange multipliers... which (unless I made a mistake) indeed gave me the answer $x_1=x_2=x_3=\dots=x_N=\bar{x}$. Or, using the vector notation of the textbook: $\underline{x}=\bar{x}\underline{1}$. I wrote my calculations below, just in case this result is wrong.
My question is: Is there a "simpler" way to prove this result (e.g. using basic high school mathematics)? Is it over-complicated to view this as a constrained optimization problem?
Many thanks for any help.
My Calculations:
Function to minimize (subject to constraint): $$f(\underline{x})=\frac{1}{N}\underline{x}^T\underline{x}$$
Constraint: $$c(\underline{x})=\frac{1}{N}\underline{1}^T\underline{x}-\bar{x}=0$$
Unconstrained function to minimize: $$h(\underline{x})=f(\underline{x})+\lambda c(\underline{x})$$
Set the gradient to zero: $$\frac{\partial f}{\partial \underline{x}} + \frac{\partial}{\partial \underline{x}}\left( \lambda c(\underline{x}) \right) = \underline{0}$$ $$\frac{2}{N}\underline{x} + \frac{\lambda}{N}\underline{1} = \underline{0}$$ Pre-multiply by $\underline{1}^T$ and use $\underline{1}^T\underline{x}=N\bar{x}$ and $\underline{1}^T\underline{1}=N$ to get $\lambda$: $$\frac{2}{N}\underline{1}^T\underline{x} + \frac{\lambda}{N}\underline{1}^T\underline{1} = 0$$ $$2\bar{x} + \lambda = 0 \quad\Rightarrow\quad \lambda = -2\bar{x}$$ Substitute $\lambda$ back in to solve for $\underline{x}$: $$\frac{2}{N}\underline{x} = \frac{2\bar{x}}{N}\underline{1}$$ $$\underline{x} = \bar{x}\underline{1}$$
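As a quick numerical sanity check on this result (not part of the derivation, just an illustrative numpy snippet): compare the candidate $\underline{x}=\bar{x}\underline{1}$ against random vectors forced to satisfy the same mean constraint.

```python
import numpy as np

rng = np.random.default_rng(0)
N, xbar = 10, 5.0

# Candidate from the Lagrange-multiplier result: every entry equal to the mean.
x_opt = np.full(N, xbar)
y_opt = np.mean(x_opt**2)  # equals xbar**2

# Random vectors shifted to have the same mean never achieve a smaller
# mean of squares.
for _ in range(1000):
    x = rng.normal(size=N)
    x += xbar - x.mean()            # enforce (1/N) * sum(x_i) = xbar
    assert np.mean(x**2) >= y_opt - 1e-9
```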
As you have already noted, for a pair of numbers with a fixed average, the mean of the squares is smallest when the two numbers are equal. The same then holds for any finite set of numbers: if two numbers in the set differ, replacing both with their common average leaves the overall mean unchanged and strictly lowers the mean of the squares, so no configuration with two unequal entries can be the minimum.
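The pair step needs only one line of algebra: for any $a$ and $b$, replacing both by their average $\frac{a+b}{2}$ keeps the sum fixed, and
$$ a^2+b^2-2\left(\frac{a+b}{2}\right)^2=\frac{(a-b)^2}{2}\ge 0, $$
with equality if and only if $a=b$. So averaging an unequal pair strictly reduces the sum of squares.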
More direct approach - proof by contradiction: Assume $\sum x^2_i$ is minimal with $x_i\ne x_j$ for some $i\ne j$. Replace this pair with two copies of their average $\frac{x_i+x_j}{2}$; the mean is unchanged, but the sum of squares is strictly lower, contradicting the minimality assumption.
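Using the $\bar{x}=5$ numbers from the question, the replacement step can be checked directly (illustrative snippet only):

```python
# Replacement step from the contradiction argument:
# swap an unequal pair for two copies of its average.
a, b = 4.0, 6.0
m = (a + b) / 2           # both replaced by 5.0; the sum a + b is unchanged
before = a**2 + b**2      # 52.0
after = 2 * m**2          # 50.0
assert after < before     # the sum of squares strictly decreases
```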