Let $R$ be an (irreducible) root system and $\ell$ a prime. Let $\sigma$ be an element of the Weyl group $W(R)$ (or even the automorphism group $A(R)$) with the following properties:
$\bullet \quad \sigma^\ell = id$ and
$\bullet \quad \sigma$ is $\textit{elliptic}$, meaning that it operates without eigenvalue $1$ on the vector space $V$ generated by $R$.
In other words (as $\ell$ is prime), the minimal polynomial of $\sigma$ is the $\ell$-th cyclotomic polynomial $X^{\ell-1} + X^{\ell-2} + ... + X + 1$.
Then the following assertion is trivial for $\ell=2$, easy for $\ell = 3$, and I think I can show it with increasingly ugly combinatorics for $\ell = 5$ and $7$:
For every $\alpha \in R$, the set $\lbrace \sigma^i(\alpha): 0 \leq i \leq \ell-2 \rbrace$ satisfies the relations of a basis of a root system of type $A_{\ell-1}$. Equivalently, the full orbit $\lbrace \sigma^i(\alpha): 0 \leq i \leq \ell-1 \rbrace$ consists of the nodes of an extended (affine) Dynkin diagram of type $A_{\ell-1}$: a cycle with $\ell$ nodes.
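For what it's worth, the claim is easy to check numerically in the model case where $R$ itself has type $A_{\ell-1}$ and $\sigma$ is a Coxeter element, realised as the cyclic coordinate shift on $\mathbb{R}^\ell$ with $\alpha = e_0 - e_1$ (a sanity check of the statement in one known instance, not a proof; the function name is ad hoc):

```python
import numpy as np

def check_orbit_cycle(l):
    """Check that the sigma-orbit of alpha = e_0 - e_1 has the Gram matrix
    of the extended (affine) Dynkin diagram of type A_{l-1}: each root pairs
    to -1 (with norm squared 2) with its two cyclic neighbours, 0 otherwise.
    Here sigma is the cyclic coordinate shift e_i -> e_{i+1 mod l} on R^l."""
    e = np.eye(l)
    orbit = [np.roll(e[0] - e[1], i) for i in range(l)]  # sigma^i(alpha)
    gram = np.array([[u @ v for v in orbit] for u in orbit])
    for i in range(l):
        for k in range(l):
            d = (k - i) % l
            expected = 2 if d == 0 else (-1 if d in (1, l - 1) else 0)
            assert gram[i, k] == expected
    return True

print(all(check_orbit_cycle(l) for l in [3, 5, 7, 11]))
```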
Question: Is there a nice proof for this for all prime $\ell$?
In other words (for $\ell > 2$), there is exactly one pair $\lbrace j, \ell-j \rbrace$ with $1 \leq j \leq \ell-1$ such that:
the angle between $\alpha$ and $\sigma^j(\alpha)$ as well as between $\alpha$ and $\sigma^{\ell-j}(\alpha)$ is $2\pi/3$;
$\alpha$ is orthogonal to $\sigma^{i}(\alpha)$ for all $i \not \equiv 0, j, \ell-j$ mod $\ell$.
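In the model case of $R$ of type $A_{\ell-1}$ with $\sigma$ the cyclic coordinate shift on $\mathbb{R}^\ell$ and $\alpha = e_0 - e_1$ (so $j = 1$), the angles come out exactly as claimed; the throwaway snippet below (names are mine) tabulates them:

```python
import numpy as np

def angles_in_orbit(l):
    """Angle (in degrees) between alpha = e_0 - e_1 and sigma^i(alpha)
    for the cyclic coordinate shift sigma on R^l: expected 120 degrees
    for i = 1 and i = l-1, and 90 degrees for all other i != 0."""
    alpha = np.eye(l)[0] - np.eye(l)[1]
    out = {}
    for i in range(1, l):
        cos = (alpha @ np.roll(alpha, i)) / (alpha @ alpha)
        out[i] = int(round(np.degrees(np.arccos(cos))))
    return out

print(angles_in_orbit(5))  # {1: 120, 2: 90, 3: 90, 4: 120}
```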
Further (and more or less equivalently), the restriction of $\sigma$ to the vector space spanned by the $\sigma$-orbit of $\alpha$ is a Coxeter element of the root system of type $A_{\ell-1}$ generated by that orbit -- more precisely, after replacing $\sigma$ by the $\sigma^j$ above, we have $\sigma = s_\alpha s_{\sigma(\alpha)} ... s_{\sigma^{\ell-2}(\alpha)}$.
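The reflection-product formula can likewise be verified numerically in the model case $R$ of type $A_{\ell-1}$, $\sigma$ the cyclic coordinate shift on $\mathbb{R}^\ell$, $\alpha = e_0 - e_1$ (so $j = 1$; again just a sanity check with ad-hoc names):

```python
import numpy as np

def reflection(beta):
    """Matrix of the reflection s_beta(v) = v - 2<v,beta>/<beta,beta> * beta."""
    beta = np.asarray(beta, dtype=float)
    return np.eye(len(beta)) - 2 * np.outer(beta, beta) / (beta @ beta)

def coxeter_product_is_shift(l):
    """Check sigma = s_alpha s_{sigma(alpha)} ... s_{sigma^{l-2}(alpha)}
    for alpha = e_0 - e_1 and sigma the cyclic coordinate shift on R^l."""
    e = np.eye(l)
    roots = [np.roll(e[0] - e[1], i) for i in range(l - 1)]  # sigma^i(alpha)
    prod = np.eye(l)
    for r in roots:
        prod = prod @ reflection(r)        # s_alpha is applied last
    sigma = np.roll(np.eye(l), 1, axis=0)  # matrix of e_i -> e_{i+1 mod l}
    return np.allclose(prod, sigma)

print(all(coxeter_product_is_shift(l) for l in [3, 5, 7]))
```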
I'm vaguely asking for a "nice proof" because, if I'm not mistaken, for $\sigma \in W(R)$ (which of course can be assumed for $\ell > 3$) the assertion follows somewhat awkwardly from R. W. Carter: Conjugacy classes in the Weyl group, Compositio Mathematica 25 (1972), pp. 1-59 (Table 3 in particular). There it is shown, essentially, that all elliptic elements which are not of the "$A_{\ell-1}$ Coxeter element form" above have different minimal polynomials.
As said, I think I have it. Actually, we do not even need $\ell$ to be a prime -- the general statement is:
Let $n \in \mathbb{N}$ and $\sigma \in A(R)$ such that $\sigma \in GL(V)$ has minimal polynomial $T^{n-1} + T^{n-2} + ... + T + 1$ (in particular $\sigma^n = id$). Then for every root $\alpha \in R$ such that the $\sigma$-conjugates of $\alpha$ span an $(n-1)$-dimensional subspace of $V$, after possibly replacing $\sigma$ by some $\sigma^j$ (where $\gcd(j,n) = 1$), the roots \begin{gather} \alpha_1 := \alpha, \alpha_2 := \sigma(\alpha), ..., \alpha_{n-1} := \sigma^{n-2}(\alpha) \end{gather} satisfy the relations of a basis of a root system of type $A_{n-1}$ (i.e. correspond to the nodes in the associated Dynkin diagram).
The proof goes via induction on the number of divisors of $n$, the base case being the prime case $n = \ell$ from the question. The result is trivially true for $\ell = 2$, so let $\ell \geq 3$. We choose an $A(R)$-invariant scalar product $\langle \cdot , \cdot \rangle$ on $V$. All $\sigma^j(\alpha)$ have the same length; normalise the scalar product so that this length is $1$. The roots $\alpha, \sigma(\alpha), ..., \sigma^{\ell-2}(\alpha)$ are linearly independent (the minimal polynomial is irreducible of degree $\ell-1$, so $\alpha$ generates an $(\ell-1)$-dimensional $\sigma$-invariant subspace), and we have \begin{align*} \sigma^{\ell-1}(\alpha) = - \sum_{i=0}^{\ell-2} \sigma^{i}(\alpha). \end{align*} Multiplying out $\langle \sigma^{\ell-1}(\alpha), \sigma^{\ell-1}(\alpha)\rangle = 1$, using the $\sigma$-invariance and $\langle \alpha, \alpha \rangle = 1$, gives \begin{align} \label{lengtheq1} (\ell-1) + 2 \sum_{i=1}^{\ell-2} (\ell-1-i) \langle \alpha, \sigma^{i}(\alpha) \rangle = 1 &&& (*) \end{align} Since $\ell \geq 3$, this forces $\langle \alpha, \sigma^{j}(\alpha) \rangle < 0$ for some $1 \leq j \leq \ell-2$.
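As a quick sanity check on the identity (*) — not on the general argument — one can compute the inner products directly in the model case $R$ of type $A_{\ell-1}$, $\sigma$ the cyclic coordinate shift on $\mathbb{R}^\ell$, $\alpha = e_0 - e_1$ normalised to length $1$ (helper name is mine):

```python
import numpy as np

def check_length_identity(l):
    """Verify (*): (l-1) + 2*sum_{i=1}^{l-2} (l-1-i)*<alpha, sigma^i(alpha)> = 1,
    with roots normalised to length 1, for sigma the cyclic shift on R^l
    and alpha = (e_0 - e_1)/sqrt(2)."""
    e = np.eye(l)
    alpha = (e[0] - e[1]) / np.sqrt(2)                  # length-1 root
    c = [alpha @ np.roll(alpha, i) for i in range(l)]   # c[i] = <alpha, sigma^i(alpha)>
    lhs = (l - 1) + 2 * sum((l - 1 - i) * c[i] for i in range(1, l - 1))
    return np.isclose(lhs, 1.0)

print(all(check_length_identity(l) for l in [3, 5, 7, 11]))
```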
(In the general case we have to ensure at this point that $\gcd(j,n) = 1$ or, what suffices here, that $j$ is not a divisor of $n$. Assuming it were, we could use the induction hypothesis on $\sigma^j$ to derive a contradiction: the space spanned by the $\sigma^i(\alpha)$ would then split into $j$ spaces of dimension $\frac{n}{j}-1$ each, not enough to span the whole space.)
Replace $\sigma$ by this $\sigma^j$, so that $\langle \alpha, \sigma(\alpha) \rangle < 0$; indeed $\langle \alpha, \sigma(\alpha) \rangle = -1/2$, because both roots have length $1$. But then $\langle \sigma^{\ell-1} (\alpha), \alpha \rangle = -1/2$ too by $\sigma$-invariance, and thus \begin{align} \sigma^{\ell-1} (\alpha) + \alpha = - \sum_{i=1}^{\ell-2} \sigma^{i}(\alpha) &&& (**) \end{align} is a root (being the sum of two roots enclosing an angle of $2\pi/3$), which moreover has length $1$ again. We are finished here if $\ell = 3$.

To proceed in the case $\ell > 3$, note first that inserting $\langle \alpha, \sigma(\alpha)\rangle = -1/2$ into (*) gives \begin{align*} 2 \sum_{i=2}^{\ell-2} (\ell-1-i) \langle \alpha, \sigma^{i}(\alpha) \rangle = 0 . \end{align*} Further note that \begin{align} \label{symmetryroot} \langle \alpha, \sigma^i (\alpha) \rangle = \langle \alpha, \sigma^{\ell-i} (\alpha) \rangle \end{align} for all $0 \leq i \leq \ell$, by $\sigma$-invariance and $\sigma^\ell = id$. Coupling these pairs reduces our equation further to \begin{align} (\ell-2) \cdot \sum_{i=2}^{\ell-2} \langle \alpha, \sigma^{i}(\alpha) \rangle = 0 &&& (***) \end{align} where we can cancel the constant too.

Applying $\sigma^{-1}$ to (**), we see that \begin{align} \label{l-2relation} \sum_{i=0}^{\ell-3} \sigma^{i}(\alpha) \end{align} is (up to sign) a root of length $1$; multiplying out its length gives \begin{align*} (\ell-2) + 2 \sum_{i=1}^{\ell-3} (\ell-2-i) \langle \alpha, \sigma^{i}(\alpha) \rangle = 1 \end{align*} or, using $\langle \alpha, \sigma(\alpha) \rangle = -1/2$ and coupling the pairs again, \begin{align*} (\ell-3) \cdot \sum_{i=2}^{\ell-3} \langle \alpha, \sigma^{i}(\alpha) \rangle = 0 \end{align*} where we again cancel the constant (we assume $\ell > 3$ now). Comparing with (***) gives \begin{align} \label{geilo} \langle \alpha, \sigma^{\ell-2}(\alpha) \rangle = \langle \alpha, \sigma^{2}(\alpha) \rangle = 0. \end{align}

From this one infers that \begin{align} \label{newestroot} \sigma^{\ell-1} (\alpha) + \alpha + \sigma(\alpha) = - \sum_{i=2}^{\ell-2} \sigma^{i}(\alpha) \end{align} is again a root of length $1$. Repeating this process shows iteratively that \begin{gather} \langle \alpha, \sigma^{i}(\alpha) \rangle = 0 \end{gather} for all $i \in \lbrace 2, ..., \ell-2 \rbrace$, which proves the claim.
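The iteration can at least be watched in the model case $R$ of type $A_{\ell-1}$ with $\sigma$ the cyclic coordinate shift on $\mathbb{R}^\ell$ and $\alpha = e_0 - e_1$: there the partial sums $\sum_{i=0}^{k} \sigma^i(\alpha)$ are the roots $e_0 - e_{k+1}$, and the asserted orthogonality relations hold. A small check (function name is mine):

```python
import numpy as np

def check_iteration(l):
    """In the cyclic-shift model on R^l with alpha = e_0 - e_1:
    each partial sum sum_{i=0}^{k} sigma^i(alpha) is the root e_0 - e_{k+1},
    and <alpha, sigma^i(alpha)> = 0 for 2 <= i <= l-2."""
    e = np.eye(l)
    alpha = e[0] - e[1]
    partial = np.zeros(l)
    for k in range(l - 1):
        partial += np.roll(alpha, k)            # add sigma^k(alpha)
        assert np.allclose(partial, e[0] - e[(k + 1) % l])
    assert all(np.isclose(alpha @ np.roll(alpha, i), 0) for i in range(2, l - 1))
    return True

print(all(check_iteration(l) for l in [5, 7, 11]))
```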