(I'm asking here and not stats.stackexchange because I'd like a mathematical proof of this)
In this question: Prove how to maximize Standard Deviation given a certain mean $\bar{x}$ and set of values; as @mathguy pointed out in his second paragraph, I had assumed that the mean and the values were not independent, and that the mean was $\frac{a+b}{2}$, where $a$ and $b$ are the minimum and maximum values of the range respectively.
I'd like to understand how to maximize the SD of $n$ values in a range $[a,b]$ for an arbitrary mean $\bar{x}=y$.
For example, how would you maximize the standard deviation for $5$ values in the range $[0,1]$ with a mean of $0.3$?
Ideally I'd love to have a solution for this specific example (or another) and an understanding for the general solution.
Should we use calculus, and if so, how? Can we use something else?
Maximising the standard deviation or variance for a given mean is equivalent to maximising the sum of squares of the values for a given sum of the values.
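This equivalence follows from the standard variance identity: for fixed $n$ and fixed mean $\bar{x}$,
$$\sigma^2 = \frac{1}{n}\sum_{i=1}^n x_i^2 - \bar{x}^2,$$
so with $n$ and $\bar{x}$ held constant, maximising $\sigma^2$ (and hence $\sigma$) amounts to maximising $\sum_{i=1}^n x_i^2$.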
Meanwhile, if $b \ge a$ and $\delta \gt 0$, replacing $a$ by $a-\delta$ and $b$ by $b+\delta$ leaves the sum unchanged, while $$(a-\delta)^2 +(b+\delta)^2 $$ $$= a^2-2a\delta+\delta^2+b^2+2b\delta + \delta^2 $$ $$= a^2+b^2 +2(b-a)\delta+2\delta^2 $$ $$\gt a^2+b^2,$$
so the sum of squares strictly increases whenever two values are pushed further apart with their sum held fixed; it is therefore maximised by making as many terms as possible as large as possible, even if this makes the others smaller.
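A quick numerical check of this spreading inequality (the values of $a$, $b$ and $\delta$ below are chosen arbitrarily for illustration):

```python
# Spreading two values apart while keeping their sum fixed
# increases the sum of squares.
a, b, delta = 0.2, 0.4, 0.1   # arbitrary choices with b >= a, delta > 0

before = a**2 + b**2
after = (a - delta)**2 + (b + delta)**2

# The sum a + b is unchanged by the move.
assert abs((a - delta) + (b + delta) - (a + b)) < 1e-12

# The sum of squares grows by exactly 2(b - a)delta + 2 delta^2.
assert abs(after - (before + 2*(b - a)*delta + 2*delta**2)) < 1e-12
print(before, after)  # the second value is strictly larger
```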
In your example of five terms in $[0,1]$ with a mean of $0.3$:

- the sum of the terms must be $5\times 0.3=1.5$;
- we can make one term as large as possible in the interval, namely $1$, leaving $0.5$ for the sum of the other four terms;
- we can then make a second term as large as possible within that remaining sum, namely $0.5$, leaving $0$ for the sum of the remaining three terms;
- so the remaining three terms are each $0$;
- and the standard deviation is maximised with the values $\{1,0.5,0,0,0\}$.
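As a sanity check, a short script can compute the (population) standard deviation of this configuration and compare it against a few other hand-picked configurations with the same mean; the alternative lists below are illustrative choices, not exhaustive:

```python
import statistics

vals = [1, 0.5, 0, 0, 0]            # the claimed maximiser; mean 0.3
assert abs(sum(vals) / 5 - 0.3) < 1e-12

sd_max = statistics.pstdev(vals)    # population standard deviation, ~0.4
print(sd_max)

# A few other configurations in [0,1] with the same mean 0.3
# (chosen arbitrarily); each should have a strictly smaller SD.
alternatives = [
    [0.3] * 5,
    [0.9, 0.6, 0, 0, 0],
    [0.75, 0.75, 0, 0, 0],
    [1, 0.25, 0.25, 0, 0],
]
for alt in alternatives:
    assert abs(sum(alt) - 1.5) < 1e-12
    assert statistics.pstdev(alt) < sd_max
```

Note that `statistics.pstdev` divides by $n$; if you use the sample standard deviation (`statistics.stdev`, dividing by $n-1$) all values scale by the same factor, so the same configuration remains the maximiser.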