Suppose that we have a set $Z_R$ of $N$ by $N$ matrices with real elements. We then focus on those members, $\mathbf{S} \in Z_R$, with all real eigenvalues.
Assuming $\mathbf{S}$ is diagonalizable, we can diagonalize it with some invertible matrix $\mathbf{X}$: \begin{equation*} \mathbf{A}=\mathbf{X^{-1}}\mathbf{S}\mathbf{X} \end{equation*} So we have \begin{equation*} \mathbf{S}=\mathbf{X}\mathbf{A}\mathbf{X^{-1}} \tag{1} \end{equation*}
Ginibre (Reference 1) says:
Infinitesimal variations $\mathbf{dA},~\mathbf{dX}$ of $\mathbf{A},~\mathbf{X}$ produce a variation $\mathbf{ dS}$ of $ \mathbf{S}$
\begin{equation*} \mathbf{dS} = \mathbf{X}( \mathbf{dA} + [\mathbf{dR}, \mathbf{A}]) \mathbf{X^{-1}} \tag{1.4} \end{equation*} where $\mathbf{dR}=\mathbf{X^{-1}}\mathbf{dX}$
I assume that, for matrices $\mathbf{\alpha},~\mathbf{\beta}$, \begin{equation*} [\mathbf{\alpha},~\mathbf{\beta}] = \mathbf{\alpha}\mathbf{\beta} - \mathbf{\beta}\mathbf{\alpha} \end{equation*}
I cannot show that (1.4) is correct. My analysis (see Other Information below) gives \begin{equation*} \mathbf{dS} = \mathbf{X}\mathbf{dA}\mathbf{X^{-1}} - \mathbf{A}\mathbf{X^{-1}}\mathbf{dX} + \mathbf{A}\mathbf{dX}\mathbf{X^{-1}} \tag{2} \end{equation*}
My question is: Is Ginibre’s (1.4) correct?
Reference 1, Jean Ginibre; Statistical Ensembles of Complex, Quaternion, and Real Matrices. J. Math. Phys. 1 March 1965; 6 (3): 440–449. https://doi.org/10.1063/1.1704292
Other Information
Derivation of (2).
Starting from $\mathbf{(1)}$
\begin{equation*} \mathbf{S}=\mathbf{X}\mathbf{A}\mathbf{X^{-1}} \tag{1} \end{equation*} If $\mathbf{A} ~\to ~ \mathbf{A}+\mathbf{dA}$, $\mathbf{X} ~\to ~ \mathbf{X+dX}$ (it is assumed that $\mathbf{dA}$ is a diagonal matrix) then
\begin{equation*} \mathbf{S+dS}=(\mathbf{X} + \mathbf{dX} ) ( \mathbf{A} +\mathbf{dA} ) (\mathbf{X} + \mathbf{dX})^{-1} \end{equation*}
It is also assumed that the binomial series \begin{equation*} (a+x)^n = a^n + na^{n-1}x + \dots \end{equation*} can be used with $a$ and $x$ matrices, so that
\begin{equation*} (\mathbf{X} + \mathbf{dX})^{-1} = \mathbf{X^{-1}} + (-1)\mathbf{X^{-2}}\mathbf{dX} = \mathbf{X^{-1}} - \mathbf{X^{-2}}\mathbf{dX} \end{equation*}
\begin{equation*} \mathbf{S+dS}=(\mathbf{X} + \mathbf{dX})( \mathbf{A} + \mathbf{dA})(\mathbf{X^{-1}} - \mathbf{X^{-2}}\mathbf{dX}) \end{equation*}
\begin{equation*} \mathbf{S+dS}=\mathbf{X}( \mathbf{A} + \mathbf{dA})(\mathbf{X^{-1}} - \mathbf{X^{-2}}\mathbf{dX}) + \mathbf{dX}( \mathbf{A} + \mathbf{dA})(\mathbf{X^{-1}} - \mathbf{X^{-2}}\mathbf{dX}) \end{equation*} \begin{equation*} ( \mathbf{A} + \mathbf{dA})(\mathbf{X^{-1}} - \mathbf{X^{-2}}\mathbf{dX}) = \mathbf{A}\mathbf{X^{-1}} + \mathbf{dA}\mathbf{X^{-1}} - \mathbf{A}\mathbf{X^{-2}}\mathbf{dX} - \mathbf{dA}\mathbf{X^{-2}}\mathbf{dX} \end{equation*} Substituting this last equation into the one just above it, dropping second-order terms, with six terms cancelling, gives $\mathbf{(2)}$
\begin{equation*} \mathbf{dS} = \mathbf{X}\mathbf{dA}\mathbf{X^{-1}} - \mathbf{A}\mathbf{X^{-1}}\mathbf{dX} + \mathbf{A}\mathbf{dX}\mathbf{X^{-1}} \tag{2} \end{equation*}
(2) may be written as \begin{equation*} \mathbf{dS} = \mathbf{X}\mathbf{dA}\mathbf{X^{-1}} - \mathbf{A}( \mathbf{X^{-1}}\mathbf{dX} - \mathbf{dX}\mathbf{X^{-1}} ) \end{equation*} i.e. \begin{equation*} \mathbf{dS} = \mathbf{X}\mathbf{dA}\mathbf{X^{-1}} - \mathbf{A}[\mathbf{X^{-1}}~,~\mathbf{dX}] \tag{3} \end{equation*}
Ginibre’s Paper.
This is a highly cited paper; it has been cited over 1,000 times, see
https://scholar.google.com/scholar?cites=12459902200217406920&as_sdt=2005&sciodt=0,5&hl=en
Ginibre's $\mathbf{(1.4)}$ is actually part of material about a set $Z_C$ of $N$ by $N$ matrices with complex entries, but it is also used later in the paper when discussing the set $Z_R$.
Your error is assuming matrices commute. They generally don't.
The second place this error appeared is when you substituted "this last equation, in the one just above it"; you should have ended up with $\mathrm{d}X$s in front of other factors, not behind them.
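The discrepancy is easy to see numerically. Here is a minimal NumPy sketch (random matrices, small perturbations; all variable names are choices of this sketch) comparing equation $(2)$ against the exact change in $S=XAX^{-1}$ — the residual is first order in the perturbation size, so $(2)$ is wrong even to first order:

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps = 4, 1e-6

# Random invertible X, diagonal A, and small perturbations dX, dA.
X = rng.standard_normal((N, N))
A = np.diag(rng.standard_normal(N))
dX = eps * rng.standard_normal((N, N))
dA = eps * np.diag(rng.standard_normal(N))

Xi = np.linalg.inv(X)

# Exact change in S = X A X^{-1} under X -> X + dX, A -> A + dA.
dS_exact = (X + dX) @ (A + dA) @ np.linalg.inv(X + dX) - X @ A @ Xi

# The question's equation (2).
dS_eq2 = X @ dA @ Xi - A @ Xi @ dX + A @ dX @ Xi

# Residual scales like eps (first order), not eps^2 (second order),
# so (2) does not agree with the exact perturbation to leading order.
err_eq2 = np.linalg.norm(dS_exact - dS_eq2)
```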
The first place this error occurred is in the "binomial thing." A rational-function / power-series identity that is valid for multiple scalar variables is not necessarily true for matrices, or may need to be written just right. With the socks-and-shoes rule $(AB)^{-1}=B^{-1}A^{-1}$, we can turn it into a valid application of a one-variable identity, though:
$$ (X+\mathrm{d}X)^{-1}=\bigl(X(I+X^{-1}\mathrm{d}X)\bigr)^{-1}=(I+X^{-1}\mathrm{d}X)^{-1}X^{-1} $$
$$ =(I-X^{-1}\mathrm{d}X+\cdots)X^{-1}=X^{-1}-X^{-1}(\mathrm{d}X)X^{-1}. $$
Without knowing the binomial series, we could also reason as follows: if $Y=X^{-1}$, then
$$ Y+\mathrm{d}Y=(X+\mathrm{d}X)^{-1} \implies (X+\mathrm{d}X)(Y+\mathrm{d}Y)=I$$
Multiplying out and keeping first-order terms gives $XY+X\mathrm{d}Y+(\mathrm{d}X)Y=I$. Substituting $Y=X^{-1}$ and solving gives
$$ \mathrm{d}(X^{-1})=-X^{-1}(\mathrm{d}X)X^{-1}. $$
Or, we can write $XY=I$ and apply $\mathrm{d}$ as a derivation, use the product rule, and solve.
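Either way, the resulting identity $\mathrm{d}(X^{-1})=-X^{-1}(\mathrm{d}X)X^{-1}$ (note the minus sign) can be checked numerically. A minimal NumPy sketch (random $X$, small $\mathrm{d}X$; names are choices of this sketch) — the residual is second order in $\|\mathrm{d}X\|$, confirming the first-order expansion:

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps = 4, 1e-6

X = rng.standard_normal((N, N))
dX = eps * rng.standard_normal((N, N))
Xi = np.linalg.inv(X)

exact = np.linalg.inv(X + dX)
first_order = Xi - Xi @ dX @ Xi   # note the minus sign

# Residual is O(||dX||^2), far below the O(||dX||) first-order term.
err = np.linalg.norm(exact - first_order)
```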
Now, you can apply $\mathrm{d}$ directly to $S=XAX^{-1}$ as well to get
$$ \mathrm{d}S=(\mathrm{d}X)AX^{-1}+X(\mathrm{d}A)X^{-1}-XA\bigl(X^{-1}(\mathrm{d}X)X^{-1}\bigr) $$
which you can then check matches Ginibre's equation $(1.4)$.
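To make that check concrete, here is a NumPy sketch (random matrices, small perturbations; variable names are choices of this sketch) confirming that the product rule applied to $S=XAX^{-1}$, with $\mathrm{d}(X^{-1})=-X^{-1}(\mathrm{d}X)X^{-1}$, agrees exactly with Ginibre's $(1.4)$ and, to first order, with a direct perturbation of $S$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, eps = 4, 1e-6

X = rng.standard_normal((N, N))
A = np.diag(rng.standard_normal(N))
dX = eps * rng.standard_normal((N, N))
dA = eps * np.diag(rng.standard_normal(N))
Xi = np.linalg.inv(X)

# Product rule on S = X A X^{-1}, using d(X^{-1}) = -X^{-1} dX X^{-1}.
dS_rule = dX @ A @ Xi + X @ dA @ Xi - X @ A @ Xi @ dX @ Xi

# Ginibre's (1.4): dS = X (dA + [dR, A]) X^{-1}, with dR = X^{-1} dX.
dR = Xi @ dX
dS_ginibre = X @ (dA + dR @ A - A @ dR) @ Xi

# Exact first-order change in S for comparison.
dS_exact = (X + dX) @ (A + dA) @ np.linalg.inv(X + dX) - X @ A @ Xi
```

The first two expressions are algebraically identical (they differ only by float roundoff), and both match the exact perturbation up to second-order terms.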