This is part of a much larger ODE problem, and I want to know how to compute $\exp(Bt)$. One method is brute force: calculate the eigenvectors of $B$ corresponding to its two imaginary eigenvalues, then construct the matrix $P$ whose columns are these eigenvectors. We can then compute the inverse $P^{-1}$ and conclude that
$$\exp(Bt)=P\exp(Dt)P^{-1}$$
where $D$ is the diagonal matrix whose diagonal entries are the eigenvalues of $B$. Doing this I get the correct answer; however, it is tedious, and the notes recommend the following instead:
Multiplication and addition of matrices of the form $\begin{bmatrix}a&-b\\b&a\end{bmatrix}$ obey the same rules as multiplication and addition of complex numbers $z=a+bi$. Therefore the matrix $\begin{bmatrix}0&-b\\b&0\end{bmatrix}$ corresponds, under this identification, to the purely imaginary number $ib$, and the relation $\exp(ib)=\cos{b}+i\sin{b}$ can be applied, leading to the answer
$$\exp(Bt)=\begin{bmatrix}\cos{2t}&-\sin{2t}\\\sin{2t}&\cos{2t}\end{bmatrix}.$$
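Both methods do agree numerically; here is a quick sketch in Python (assuming $B=\begin{bmatrix}0&-2\\2&0\end{bmatrix}$, the matrix implied by the stated result):

```python
import numpy as np

# B = [[0, -2], [2, 0]] is inferred from the claimed answer
# exp(Bt) = [[cos 2t, -sin 2t], [sin 2t, cos 2t]].
B = np.array([[0.0, -2.0],
              [2.0, 0.0]])
t = 0.7  # an arbitrary test time

# Brute-force method: diagonalize B = P D P^{-1} over the complex numbers,
# then exp(Bt) = P exp(Dt) P^{-1}.
eigvals, P = np.linalg.eig(B)  # eigenvalues are +/- 2i
expBt = (P @ np.diag(np.exp(eigvals * t)) @ np.linalg.inv(P)).real

# The rotation-matrix formula from the notes.
rot = np.array([[np.cos(2 * t), -np.sin(2 * t)],
                [np.sin(2 * t),  np.cos(2 * t)]])

print(np.allclose(expBt, rot))  # the two methods agree
```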
Can someone show me what is happening here?
Set
$J = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}; \tag 1$
then it is easily seen that
$J^2 = -I; \tag 2$
continuing in this manner we find
$J^3 = -J, \tag 3$
$J^4 = -J^2 = I, \tag 4$
$J^5 = J, \tag 5$
and in general for $n \in \Bbb N$,
$J^{4n + k} = J^{4n}J^k = (J^4)^n J^k = I^n J^k = J^k, \tag 6$
where
$0 \le k < 4; \tag 7$
we only need consider $k$ in this limited range since every $m \in \Bbb N$ may be uniquely written as
$m = 4n + k, \; n \in \Bbb N, 0 \le k < 4 \tag 8$
by the classical Euclidean division algorithm. The reader will undoubtedly recognize the pattern in the powers of $J$ as identical to that of the powers of $i \in \Bbb C$:
$i^2 = -1, \; i^3 = -i, \; i^4 = 1, \tag 9$
and of course
$i^{4n + k} = (i^4)^n i^k = i^k; \tag{10}$
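This parallel between the powers of $J$ and the powers of $i$ can be checked directly; a minimal sketch:

```python
import numpy as np

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
I2 = np.eye(2)

# The powers of J cycle with period 4 -- J, -I, -J, I -- exactly
# matching the cycle i, -1, -i, 1 of the powers of i.
assert np.allclose(np.linalg.matrix_power(J, 2), -I2)  # J^2 = -I, like i^2 = -1
assert np.allclose(np.linalg.matrix_power(J, 3), -J)   # J^3 = -J, like i^3 = -i
assert np.allclose(np.linalg.matrix_power(J, 4),  I2)  # J^4 =  I, like i^4 =  1
assert np.allclose(np.linalg.matrix_power(J, 4 * 3 + 1), J)  # J^{4n+k} = J^k
```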
furthermore, the matrix $J$ is of unit norm,
$\Vert J \Vert = 1, \tag{11}$
which is readily evident since via (1)
$\left \Vert J\begin{pmatrix} a \\ b \end{pmatrix} \right \Vert^2 = \left \Vert \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{pmatrix} a \\ b \end{pmatrix} \right \Vert^2 = \left \Vert \begin{pmatrix} -b \\ a \end{pmatrix} \right \Vert^2 = b^2 + a^2 = \left \Vert \begin{pmatrix} a \\ b \end{pmatrix} \right \Vert^2,\tag{12}$
whence
$\left \Vert J\begin{pmatrix} a \\ b \end{pmatrix} \right \Vert = \left \Vert \begin{pmatrix} a \\ b \end{pmatrix} \right \Vert, \tag{13}$
which implies (11).
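The norm computation (12)-(13) is equally easy to check numerically; a sketch with the arbitrary test vector $(a, b) = (3, 4)$:

```python
import numpy as np

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

v = np.array([3.0, 4.0])  # an arbitrary vector (a, b)
# J sends (a, b) to (-b, a), as in (12), so lengths are preserved.
assert np.allclose(J @ v, np.array([-4.0, 3.0]))
assert np.isclose(np.linalg.norm(J @ v), np.linalg.norm(v))  # both equal 5
```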
Now for any
$\omega \in \Bbb R \tag{14}$
we have the power series representation of
$e^{i\omega t} = \displaystyle \sum_0^\infty \dfrac{(i\omega t)^n}{n!}; \tag{15}$
using (9)-(10), this series may be re-arranged as follows:
$\displaystyle \sum_0^\infty \dfrac{(i\omega t)^n}{n!} = \sum_0^\infty \dfrac{(\omega t)^n i^n}{n!}$ $= \displaystyle \sum_0^\infty \dfrac{(\omega t)^{2n}i^{2n}}{(2n)!} + \sum_0^\infty \dfrac{(\omega t)^{2n + 1}i^{2n + 1}}{(2n + 1)!} = \sum_0^\infty \dfrac{(\omega t)^{2n}i^{2n}}{(2n)!} + i\sum_0^\infty \dfrac{(\omega t)^{2n + 1}i^{2n}}{(2n + 1)!}$ $= \displaystyle \sum_0^\infty \dfrac{(-1)^n(\omega t)^{2n}}{(2n)!} + i\sum_0^\infty \dfrac{(-1)^n (\omega t)^{2n + 1}}{(2n + 1)!}. \tag{16}$
Those familiar with the Taylor series of basic trigonometric functions will recognize the expression on the extreme right of (16) as $\cos(\omega t) + i\sin(\omega t)$; that is, we have shown that
$e^{i \omega t} = \cos(\omega t) + i\sin(\omega t). \tag{17}$
A word about convergence: all of the above series converge absolutely as may be seen via the ratio test; we illustrate this for the last series on the extreme right of (16), that is, for
$\sin(\omega t) = \displaystyle \sum_0^\infty \dfrac{(-1)^n (\omega t)^{2n + 1}}{(2n + 1)!}; \tag{18}$
the ratio of the absolute values of succeeding terms is then
$\rho_n = \dfrac{\vert \omega t \vert^{2n + 3}}{(2n + 3)!} / \dfrac{\vert \omega t \vert^{2n + 1}}{(2n + 1)!} = \dfrac{\vert \omega t \vert^2}{(2n + 2)(2n + 3)} \to 0 \; \text{as} \; n \to \infty, \tag{19}$
independently of $\omega$ and $t$; it is the absolute convergence of these series which ensures the legitimacy of the re-arrangements and re-groupings performed in (16).
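Both the convergence of (18) and the vanishing of the ratios (19) are easy to observe numerically; a sketch using the sample value $\omega t = 1.4$ (an arbitrary choice):

```python
import math

omega_t = 1.4  # a sample value of omega * t

# Partial sum of (18): sum over n of (-1)^n (wt)^{2n+1} / (2n+1)!
s = 0.0
for n in range(20):
    s += (-1) ** n * omega_t ** (2 * n + 1) / math.factorial(2 * n + 1)
assert abs(s - math.sin(omega_t)) < 1e-12  # converges to sin(wt)

# The ratios (19) of successive absolute terms decrease toward 0.
ratios = [omega_t ** 2 / ((2 * n + 2) * (2 * n + 3)) for n in range(5)]
assert all(r2 < r1 for r1, r2 in zip(ratios, ratios[1:]))
```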
We next observe that, by virtue of (2)-(8), every step of the calculation (16) carries through with $i$ replaced by $J$; also, (11) allows us to conclude that the resulting series are also absolutely convergent, and therefore we may write
$e^{\omega t J} = \cos (\omega t) I + \sin (\omega t) J. \tag{20}$
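The matrix version (20) of Euler's formula can itself be verified by summing the matrix power series directly, just as in (16); a sketch with the sample values $\omega = 2$, $t = 0.7$:

```python
import numpy as np

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
I2 = np.eye(2)
omega, t = 2.0, 0.7  # sample values; omega = 2 is the case of interest

# exp(wtJ) via its power series sum_n (wtJ)^n / n!, truncated at 30 terms
# (the series converges rapidly, as the ratio test shows).
A = omega * t * J
term, total = I2.copy(), I2.copy()
for n in range(1, 30):
    term = term @ A / n
    total = total + term

closed_form = np.cos(omega * t) * I2 + np.sin(omega * t) * J
assert np.allclose(total, closed_form)  # (20) holds
```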
Finally, taking
$\omega = 2, \tag{21}$
we see that
$e^{Bt} = e^{2 t J} = \cos(2t) I + \sin(2t) J = \begin{bmatrix} \cos (2t) & -\sin(2t) \\ \sin(2t) & \cos (2t) \end{bmatrix}, \tag{22}$
as was to be shown.
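As a final sanity check, (22) agrees with a library matrix exponential (a sketch assuming SciPy is available):

```python
import numpy as np
from scipy.linalg import expm  # SciPy's matrix exponential

t = 0.7  # an arbitrary test time
B = 2.0 * np.array([[0.0, -1.0],
                    [1.0,  0.0]])  # B = 2J

rot = np.array([[np.cos(2 * t), -np.sin(2 * t)],
                [np.sin(2 * t),  np.cos(2 * t)]])
assert np.allclose(expm(B * t), rot)  # (22) holds
```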
And that is what is happening here!