In P.4 of this technical report there is an equation:
\begin{align} \left.\frac{\partial^{2}}{\partial \omega_{x}\,\partial \omega_{y}}\left(\mathbf{R}_{0}\exp\{J(\omega)\}\right) \right|_{\omega=0} & = \left.\frac{\partial^{2}}{\partial \omega_{x}\,\partial \omega_{y}}\left(\mathbf{R}_{0}\sum_{n=0}^{\infty}\frac{1}{n!}J(\omega)^{n}\right) \right|_{\omega=0} \\ & = \mathbf{R}_{0}\,\frac{1}{2}\left(J(\hat{x})J(\hat{y}) + J(\hat{y})J(\hat{x})\right) \end{align}
where $\mathbf{R}_{0} \in SO(3)$ (a Lie group), $\omega=[\omega_{x} \ \omega_{y} \ \omega_{z}]^\intercal \in \mathbb{R}^{3}$, and \begin{align} J(\omega) = \begin{bmatrix} 0 & -\omega_{z} & \omega_{y} \\ \omega_{z} & 0 & -\omega_{x} \\ -\omega_{y} & \omega_{x} & 0 \end{bmatrix} \end{align} By the way, I guess (the document gives no explanation) that $\hat{x}=[1 \ 0\ 0]^\intercal$ and $\hat{y}=[0 \ 1\ 0]^\intercal$.
My question is: how does one derive this? \begin{align} \mathbf{R}_{0}\frac{1}{2}\left(J(\hat{x})J(\hat{y}) + J(\hat{y})J(\hat{x})\right) \end{align}
Notice that $J (\hat x) = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{bmatrix}$, $J (\hat y) = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix}$ and $J (\hat z) = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$, so that $J (\omega)$ = $\omega_x J(\hat x) + \omega_y J(\hat y) + \omega_z J(\hat z)$.
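To make this concrete, here is a small numerical sketch (the name `hat` for the map $\omega \mapsto J(\omega)$ is my own, not from the report) that builds these three matrices and checks the linearity $J(\omega) = \omega_x J(\hat x) + \omega_y J(\hat y) + \omega_z J(\hat z)$:

```python
import numpy as np

def hat(w):
    """The map w -> J(w): a 3-vector to its skew-symmetric matrix."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

Jx, Jy, Jz = hat([1, 0, 0]), hat([0, 1, 0]), hat([0, 0, 1])

# J is linear in w, so it decomposes along the basis vectors.
w = np.array([0.3, -0.7, 1.1])
assert np.allclose(hat(w), w[0] * Jx + w[1] * Jy + w[2] * Jz)
```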
Since a power series may be differentiated term by term, let us see how to compute $\frac {\partial^2} {\partial \omega_x \partial \omega_y} J(\omega)^n$. Because $J(\omega)$ does not commute with its partial derivatives, we may not use the scalar rule and say that the derivative is $n J(\omega)^{n-1} \frac {\partial} {\partial \omega_y} J(\omega)$. Nevertheless, the Leibniz product rule still holds, so that
$$\frac {\partial} {\partial \omega_y} J(\omega)^n = \sum \limits _{j=1} ^n J(\omega) \dots \underbrace {\left( \frac {\partial} {\partial \omega_y} J(\omega) \right)} _{\text{position } j} \dots J(\omega) .$$
This implies that
$$\frac {\partial^2} {\partial \omega_x \partial \omega_y} J(\omega)^n = \sum \limits _{i,j=1, \ i \ne j} ^n J(\omega) \dots \underbrace {\left( \frac {\partial} {\partial \omega_x} J(\omega) \right)} _{\text{position } i} \dots J(\omega) \dots \underbrace {\left( \frac {\partial} {\partial \omega_y} J(\omega) \right)} _{\text{position } j} \dots J(\omega) + \\ \sum \limits _{i=1} ^n J(\omega) \dots \underbrace {\left( \frac {\partial^2} {\partial \omega_x \partial \omega_y} J(\omega) \right)} _{\text{position } i} \dots J(\omega) .$$
Since $J (\omega)$ = $\omega_x J(\hat x) + \omega_y J(\hat y) + \omega_z J(\hat z)$, we have that $\frac {\partial^2} {\partial \omega_x \partial \omega_y} J(\omega) = 0$ (so the second sum above is $0$), while $\frac {\partial} {\partial \omega_x} J(\omega) = J (\hat x)$ and $\frac {\partial} {\partial \omega_y} J(\omega) = J (\hat y)$; therefore we may write
$$\frac {\partial^2} {\partial \omega_x \partial \omega_y} J(\omega)^n = \sum \limits _{i,j=1, \ i \ne j} ^n J(\omega) \dots \underbrace {J(\hat x)} _{\text{position } i} \dots J(\omega) \dots \underbrace {J (\hat y)} _{\text{position } j} \dots J(\omega) .$$
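As a sanity check, one can compare this double sum against a central finite-difference approximation of the mixed derivative of $J(\omega)^n$. This is a rough numerical sketch; the test point $\omega$ and step size $h$ are arbitrary choices of mine:

```python
import numpy as np

def hat(w):
    """The map w -> J(w): a 3-vector to its skew-symmetric matrix."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

Jx, Jy = hat([1, 0, 0]), hat([0, 1, 0])

def d2_power_sum(w, n):
    """Double sum over i != j: J(x-hat) in slot i, J(y-hat) in slot j, J(w) elsewhere."""
    J = hat(w)
    total = np.zeros((3, 3))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            factors = [J] * n
            factors[i], factors[j] = Jx, Jy
            prod = np.eye(3)
            for F in factors:
                prod = prod @ F
            total += prod
    return total

def d2_power_fd(w, n, h=1e-4):
    """Central finite difference of the mixed partial of J(w)^n."""
    f = lambda v: np.linalg.matrix_power(hat(v), n)
    ex, ey = np.array([h, 0, 0]), np.array([0, h, 0])
    return (f(w + ex + ey) - f(w + ex - ey)
            - f(w - ex + ey) + f(w - ex - ey)) / (4 * h * h)

w = np.array([0.2, 0.5, -0.3])
for n in (2, 3, 4):
    assert np.allclose(d2_power_sum(w, n), d2_power_fd(w, n), atol=1e-5)
```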
Notice that the above formula is valid only for $n \ge 2$, because the terms corresponding to $n=0$ and $n=1$ have been annihilated by the 2nd order derivative.
Now, if $n \ge 3$ and you set $\omega = 0$, then each of the products in the sum above will contain at least one factor $J(\omega)$ that also becomes $0$, so all terms with $n \ge 3$ vanish at $\omega = 0$, leaving a single survivor: the term corresponding to $n=2$, which is easily seen to be $J(\hat x) J(\hat y) + J(\hat y) J(\hat x)$. Plugging it back into the original series, where the $n=2$ term carries the coefficient $\frac 1 {2!} = \frac 1 2$ and everything is left-multiplied by ${\bf R}_0$, gives the desired result.
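The whole identity can also be verified numerically by finite-differencing $\mathbf{R}_0 \exp\{J(\omega)\}$ at $\omega = 0$. A sketch under my own choices: `expm_series` is a naive truncated-series exponential (adequate for tiny arguments), and $\mathbf{R}_0$ is an arbitrary rotation about $z$; neither comes from the report:

```python
import numpy as np

def hat(w):
    """The map w -> J(w): a 3-vector to its skew-symmetric matrix."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def expm_series(A, terms=20):
    """Matrix exponential via the truncated power series (fine for tiny A)."""
    term, total = np.eye(3), np.eye(3)
    for n in range(1, terms):
        term = term @ A / n
        total = total + term
    return total

# An arbitrary R0 in SO(3): rotation about z by 0.4 rad.
c, s = np.cos(0.4), np.sin(0.4)
R0 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

Jx, Jy = hat([1, 0, 0]), hat([0, 1, 0])
f = lambda w: R0 @ expm_series(hat(w))

# Central finite difference of the mixed partial at w = 0.
h = 1e-4
ex, ey = np.array([h, 0, 0]), np.array([0, h, 0])
mixed = (f(ex + ey) - f(ex - ey) - f(-ex + ey) + f(-ex - ey)) / (4 * h * h)

assert np.allclose(mixed, R0 @ (0.5 * (Jx @ Jy + Jy @ Jx)), atol=1e-5)
```

The assertion matching is exactly the claim in the question: the mixed partial at $\omega=0$ equals $\mathbf{R}_0 \frac 1 2 (J(\hat x)J(\hat y) + J(\hat y)J(\hat x))$.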