Given a probability vector $v=(v_1,\ldots,v_n)$ with $0\leq v_i\leq 1$ and $\sum_{i=1}^n v_i=1$, its entropy can be defined as $$ H(v):=-\sum_{i=1}^nv_i\log v_i $$ (with the usual convention $0\log 0=0$). I wonder what is known about transformations that leave the entropy of the vector unchanged. That is, given $v$, a map $T:\mathbb R^n \to \mathbb R^n$ that takes probability vectors to probability vectors and satisfies $H(v)=H(T(v))$.
A permutation would be a trivial example. For an extremal vector $(0,\ldots ,1,\ldots,0)$, permutations are the only solutions. In general, however, there are infinitely many valid transformations. Is anything known about the structure of these maps?
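To make the "infinitely many" claim concrete, here is a minimal numerical sketch (Python, assuming NumPy and SciPy are available; all names are illustrative). For a fixed $v$ on the 3-simplex, it pins the first coordinate of the image at some value $t$ and solves for the split of the remaining mass so that the entropy matches $H(v)$; varying $t$ traces out a continuum of non-permutation images of $v$.

```python
import numpy as np
from scipy.optimize import brentq

def H(v):
    """Shannon entropy of a probability vector (natural log, 0 log 0 = 0)."""
    v = np.asarray(v, dtype=float)
    nz = v > 0
    return -np.sum(v[nz] * np.log(v[nz]))

# Target vector and its entropy.
v = np.array([0.5, 0.3, 0.2])
h0 = H(v)

# One-parameter family on the simplex: fix the first coordinate at t and
# split the remaining mass 1 - t as (s, 1 - t - s).
def w_of(t, s):
    return np.array([t, s, 1.0 - t - s])

t = 0.45                      # any t for which the level set is crossed
f = lambda s: H(w_of(t, s)) - h0

# H is concave in s on (0, 1 - t), so bracket a root on the lower branch.
s = brentq(f, 1e-9, (1.0 - t) / 2)
w = w_of(t, s)

print(w, H(w), h0)            # H(w) == h0 up to numerical tolerance
```

Here $w \approx (0.45, 0.17, 0.38)$ is not a permutation of $v=(0.5,0.3,0.2)$, and each admissible $t$ yields a different such $w$.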
Here is a visualization of the isoentropy contours on the 3-variable simplex (from *Information Theoretic Learning: Renyi's Entropy and Kernel Perspectives*; disclaimer: I contributed to a chapter in this book).
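For readers without the book at hand, contours of this kind can be reproduced with a short script. This is a sketch, not the book's figure, and it assumes matplotlib: it grids barycentric coordinates $(p_1,p_2,p_3)$, evaluates $H$, and draws the level sets on an equilateral-triangle embedding of the simplex.

```python
import numpy as np
import matplotlib.pyplot as plt

# Grid of barycentric coordinates; points with p3 < 0 fall outside the simplex.
n = 400
p1, p2 = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
p3 = 1.0 - p1 - p2
mask = p3 >= 0

# Entropy on the grid, with the convention 0 log 0 = 0.
with np.errstate(divide="ignore", invalid="ignore"):
    Hgrid = -(np.where(p1 > 0, p1 * np.log(p1), 0.0)
              + np.where(p2 > 0, p2 * np.log(p2), 0.0)
              + np.where(p3 > 0, p3 * np.log(p3), 0.0))
Hgrid[~mask] = np.nan          # hide points outside the simplex

# Equilateral-triangle embedding of the simplex.
x = p2 + 0.5 * p3
y = (np.sqrt(3) / 2) * p3

plt.contour(x, y, Hgrid, levels=15)
plt.gca().set_aspect("equal")
plt.title("Isoentropy contours on the 3-variable simplex")
plt.show()
```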
You could characterize the admissible local changes by studying the tangent space of the isoentropy manifold. Since $\partial H/\partial v_i = -(1+\log v_i)$, a perturbation $\delta$ stays on the simplex and, to first order, on the level set of $H$ precisely when $\sum_i \delta_i = 0$ and $\sum_i (1+\log v_i)\,\delta_i = 0$.
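As a sanity check of this tangent-space picture, the following sketch (illustrative, using NumPy) projects a random direction onto the orthogonal complement of $\mathrm{span}\{\mathbf 1, \nabla H(v)\}$ and verifies that moving along it changes $H$ only at second order in the step size.

```python
import numpy as np

def H(v):
    return -np.sum(v * np.log(v))

v = np.array([0.5, 0.3, 0.2])

# Gradient of H at v; tangent directions must be orthogonal to both
# this gradient (to preserve H to first order) and the all-ones vector
# (to keep the coordinates summing to 1).
grad = -(1.0 + np.log(v))

# Orthonormalize {1, grad} by Gram-Schmidt, then project both out of a
# random direction to obtain a tangent vector d.
basis = []
for u in (np.ones_like(v), grad):
    for b in basis:
        u = u - (u @ b) * b
    basis.append(u / np.linalg.norm(u))

rng = np.random.default_rng(0)
d = rng.standard_normal(v.size)
for b in basis:
    d -= (d @ b) * b

# The first-order term vanishes, so the change in H shrinks like eps**2.
for eps in (1e-2, 1e-3, 1e-4):
    print(eps, H(v + eps * d) - H(v))
```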