Consider the set of probability vectors $$ \mathcal P_n=\Big\lbrace x\in[0,1]^n\,\Big|\,\sum_{i=1}^n x_i=1\Big\rbrace\subset\mathbb R^n $$ for $n\in\mathbb N$, where $x_i$ denotes the $i$-th component of $x$. Since $\mathcal P_n$ is non-empty, closed, and convex, it contains a unique vector of minimal (Euclidean) norm, which, unsurprisingly, turns out to be the equilibrium distribution $\frac1n(1,\ldots,1)$, as a simple consequence of the Cauchy–Schwarz inequality.
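To spell out that step: applying Cauchy–Schwarz to $x$ and the all-ones vector $\mathbf 1=(1,\ldots,1)$ yields $$ 1=\sum_{i=1}^n x_i=\langle x,\mathbf 1\rangle\le\lVert x\rVert_2\,\lVert\mathbf 1\rVert_2=\sqrt n\,\lVert x\rVert_2, $$ so $\lVert x\rVert_2\ge\frac1{\sqrt n}$, with equality precisely when $x$ is a scalar multiple of $\mathbf 1$, i.e. $x=\frac1n(1,\ldots,1)$.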
On the other hand, we can define the entropy on $\mathcal P_n$, as in quantum information, via
$$ S:\mathcal P_n\to\mathbb R_0^+,\qquad x\mapsto-\sum_{i=1}^n x_i\log(x_i), $$
with the usual convention $0\log(0):=0$.
It is well known that the entropy is maximal, with value $\log(n)$, if and only if $x=\frac1n(1,\ldots,1)$.
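For instance, one standard proof uses Jensen's inequality for the strictly concave logarithm: summing over the support of $x$, $$ S(x)=\sum_{i\,:\,x_i>0}x_i\log\Big(\frac{1}{x_i}\Big)\le\log\Big(\sum_{i\,:\,x_i>0}x_i\cdot\frac{1}{x_i}\Big)\le\log(n), $$ with equality throughout if and only if all the $x_i$ are equal, i.e. $x=\frac1n(1,\ldots,1)$.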
Question: I wondered whether the fact that the same unique vector $\frac1n(1,\ldots,1)$ both maximizes the entropy and minimizes the norm is more than a coincidence. I don't see a direct connection, since the logarithm is obviously not linear, but I still feel there might be some kind of link between these two results.
Thanks in advance for any answer or comment!