Let $V = p_1 p_2 \cdots p_n$ where $p_i \in [0,1]$ and $\sum p_i = 1$, so the $p_i$ define a discrete probability distribution. Now I'd like to find the $p_i$ that maximize $V$. Intuitively it makes sense that the maximum must be at $p_i = 1/n$ for all $i$, and it is easy to verify that the gradient of $V$ vanishes there. But now I'd like to prove the converse: if $V$ is maximal, then $p_i = 1/n$ for all $i$.
With Lagrange multipliers it is rather straightforward, but I have also tried solving the system you get by setting the gradient to zero directly, without much success. Can anyone spot a simplification or a nice argument that would let me continue along that route?
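(For comparison, here is a sketch of the multiplier computation I mean, assuming all $p_i \neq 0$: with
$$\mathcal{L}(p, \lambda) = \prod_{i=1}^n p_i - \lambda\Big(\sum_{i=1}^n p_i - 1\Big),$$
stationarity gives
$$0 = \frac{\partial \mathcal{L}}{\partial p_j} = \frac{V}{p_j} - \lambda \quad\Longrightarrow\quad p_j = \frac{V}{\lambda} \text{ for all } j,$$
so all the $p_j$ are equal, and the constraint then forces $p_j = 1/n$.)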
Here's my attempt so far: first I substituted the constraint $\sum p_i = 1$ to eliminate the last variable (for ease of writing, let $m := n-1$, and assume the index now runs up to $m$). So
$$V = \underbrace{\Big(\prod_i p_i \Big)}_{=:P}\Big(1 - \underbrace{\sum_i p_i}_{=:S}\Big)$$
Actually, it should be possible to prove that there is only one local maximum even without the restriction $p_i \in [0,1]$:
$$0 \overset{!}{=} \frac{\partial V}{\partial p_j} = \frac{P}{p_j}\,(1-S) - P$$
Assuming all $p_i \neq 0$ (if some $p_i = 0$ we can discard it and restart the problem with fewer variables), we can divide by $P$ and simplify this to
$$ p_j = 1 - S $$
(Note that this is a system of equations, one for each $j=1,\ldots,m$, and $S$ itself depends on all of the $p_i$.)
Now at that point I thought maybe we could use some fixed-point iteration argument, but I didn't manage to find anything that works. So I'd love to hear if anyone can expand this argument, but I'm also interested in other elegant ways of solving this.
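For what it's worth, a quick numerical sanity check supports the conjecture (plain Python; sampling the simplex by normalizing i.i.d. exponential draws, which gives a flat Dirichlet distribution):

```python
import random

def V(p):
    """The objective: the product of the coordinates of p."""
    out = 1.0
    for x in p:
        out *= x
    return out

def random_simplex_point(n):
    """Uniform sample from the probability simplex:
    i.i.d. Exp(1) draws, normalized (a flat Dirichlet)."""
    e = [random.expovariate(1.0) for _ in range(n)]
    s = sum(e)
    return [x / s for x in e]

n = 5
uniform_value = V([1.0 / n] * n)  # conjectured maximum, (1/n)^n
best_random = max(V(random_simplex_point(n)) for _ in range(20_000))
print(uniform_value, best_random)
```

In my runs no random point on the simplex ever beats the uniform one, which is of course no proof, but it is consistent with the claim.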
The AM-GM inequality tells you that $$(p_1 p_2 \cdots p_n)^{1/n} \leq \frac 1 n \sum_{i = 1}^n p_i$$ and crucially, that equality is achieved if and only if $p_1 = p_2 = \cdots = p_n$. This fact should suffice to prove both directions (the inequality itself tells you that $p_1 = p_2 = \cdots = p_n$ yields an optimal solution, and the information about when equality is achieved tells you that this is the only optimal solution).
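Concretely, since $\sum_i p_i = 1$ here, raising both sides to the $n$-th power gives
$$V = p_1 p_2 \cdots p_n \leq \Big(\frac{1}{n}\sum_{i=1}^n p_i\Big)^{n} = \frac{1}{n^n},$$
with equality precisely when $p_1 = \cdots = p_n = \frac{1}{n}$, so the uniform distribution is the unique maximizer.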