The matrix differential equation $$ \frac{dX(t)}{dt} = A \cdot X(t), $$
where
$$X(t)=\large \begin{pmatrix} x_1(t)\\x_2(t)\\x_3(t)\\x_{4}(t)\end{pmatrix},$$ $$A=\large \begin{pmatrix} 0&1&0&0\\-k_{1}&0&k_{2}&0\\0&0&0&1\\k_{3}&0&-k_{4}&0\end{pmatrix}.$$ Here $k_i\in \mathbb{R}$, but their values are not given.
Initial condition: $$X(0)=\large \begin{pmatrix} x_{10}\\x_{20}\\x_{30}\\x_{40}\end{pmatrix},$$
We know that the solution is $X(t)=e^{At}X(0)$.
The question I want to ask: how can we find, or express symbolically, the fundamental matrix of the equation?
Part A: the "traditional" solution
Let us reorganize the system:
$$\begin{pmatrix}\dot x_1(t)\\ \dot x_2(t)\\ \dot x_3(t)\\ \dot x_{4}(t)\end{pmatrix}=\begin{pmatrix} 0&1&0&0\\-k_{1}&0&k_{2}&0\\0&0&0&1\\k_{3}&0&-k_{4}&0\end{pmatrix}\begin{pmatrix} x_1(t)\\x_2(t)\\x_3(t)\\x_{4}(t)\end{pmatrix}$$
by grouping the 2nd and the 4th equation, and, apart, the 1st and 3rd:
$$\tag{1}(a)\begin{cases}\dot x_2 & = & -k_1 x_1 +k_2 x_3\\ \dot x_4 & = & k_3 x_1 -k_4 x_3\end{cases} \ \ \ \ \ \ \ \ \ \ (b)\begin{cases}\dot x_1 & = & x_2\\ \dot x_3 & = & x_4\end{cases} $$ Differentiating the LHSs and RHSs of (1)(a)
$$\tag{2}\begin{cases}\ddot x_2 & = & -k_1 \dot x_1 +k_2 \dot x_3\\ \ddot x_4 & = & k_3 \dot x_1 -k_4 \dot x_3\end{cases}$$
Substituting (1)(b) into (2) gives
$$\tag{3}\begin{cases}(i)&\ddot x_2 & = & -k_1 x_2 +k_2 x_4\\ (ii)&\ddot x_4 & = & k_3 x_2 -k_4 x_4\end{cases}$$
Let us pause here for a moment. If $k_2$ or $k_3$ is zero or very small, you have two uncoupled harmonic oscillators. This case is illustrated in the figure below, using the results of Part B. Have you seen this very important second-order differential equation with sinusoidal solutions?
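For instance, taking $k_2=0$ in (3)(i) and assuming $k_1>0$ (a sign assumption of mine, needed for oscillation), one gets the familiar harmonic oscillator:

$$\ddot x_2 = -k_1 x_2 \ \implies \ x_2(t)=x_2(0)\cos\!\left(\sqrt{k_1}\,t\right)+\frac{\dot x_2(0)}{\sqrt{k_1}}\sin\!\left(\sqrt{k_1}\,t\right),$$

and similarly for $x_4$ from (3)(ii) when $k_3=0$.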
If we are not in this case, it suffices to combine $k_4 \times (i) + k_2 \times (ii)$ to obtain
$$\tag{4}k_2 \ddot x_4 \ = \ -k_4 \ddot x_2 + (k_2 k_3 - k_1 k_4) x_2 $$
Now, if we differentiate (3)(i) twice and take (4) into account, we are left with a fourth-order differential equation for the function $x_2$, which is not difficult to solve because its characteristic polynomial has the biquadratic form $ar^4+br^2+c=0.$
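Explicitly (filling in this computation): differentiating (3)(i) twice gives $x_2^{(4)} = -k_1\ddot x_2 + k_2\ddot x_4$, and replacing $k_2\ddot x_4$ by means of (4) yields

$$x_2^{(4)} + (k_1+k_4)\,\ddot x_2 + (k_1 k_4 - k_2 k_3)\,x_2 = 0,$$

whose characteristic equation $r^4+(k_1+k_4)r^2+(k_1k_4-k_2k_3)=0$ is a quadratic in $r^2$.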
Having obtained $x_2$, it remains to plug it into (4) to get $x_4$, taking into account the initial conditions.
Part B: I have obtained a general form, not of $\exp(tA)$, but of $\exp(t\Omega)$, where $\Omega$ is the following "shuffled form" of $A$:
$$\tag{5}\Omega=\left(\begin{array}{rr|rr} 0&0&1&0\\0&0&0&1\\ \hline -k_{1}&k_{2}&0&0\\k_{3}&-k_{4}&0&0\end{array}\right)=\left(\begin{array}{rr} 0&I\\B&0\end{array}\right)$$
(using block notation); it clearly corresponds to the separation (1)(a) / (1)(b) above.
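To make the correspondence concrete: $\Omega = PAP^{\top}$ for the permutation matrix $P$ that reorders the state as $(x_1,x_3,x_2,x_4)$, hence $\exp(tA)=P^{\top}\exp(t\Omega)P$, which recovers the fundamental matrix asked about. A quick numerical sketch, with arbitrary sample values for the $k_i$ (they are not given in the problem):

```python
import numpy as np
from scipy.linalg import expm

k1, k2, k3, k4 = 2.0, 0.5, 0.3, 1.0    # sample values, chosen only for illustration
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [-k1, 0.0,  k2, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [ k3, 0.0, -k4, 0.0]])

# Permutation sending the state (x1, x2, x3, x4) to the shuffled order (x1, x3, x2, x4)
P = np.eye(4)[[0, 2, 1, 3]]
Omega = P @ A @ P.T                     # reproduces the "shuffled form" (5)

t = 0.7
print(np.allclose(expm(t * A), P.T @ expm(t * Omega) @ P))  # True
```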
What I have found is that, under the condition that matrix $B$ possesses a square root $C$, i.e. a matrix such that $C^2=B$, then:
$$\tag{6}\exp(t\Omega)=\left(\begin{array}{cc} \cosh(t C)&C^{-1}\sinh(tC)\\ C \sinh(t C)&\cosh(t C)\end{array}\right).$$
where $\cosh(tC)$ and $\sinh(tC)$ are respectively the matrix $\cosh$ and the matrix $\sinh$, defined by series (see (Hyperbolic cosine of a matrix)), in the same way the matrix exponential is defined.
Proof sketch of (6): It comes from series expansion; in fact, powers of $\Omega$ have a rather simple general structure, depending on the parity of the exponent:
$$\Omega=\left(\begin{array}{cc} 0&I\\C^2&0\end{array}\right) \ \ \implies \ \ \Omega^{2k}=\left(\begin{array}{cc} C^{2k}&0\\0&C^{2k}\end{array}\right) \ \ \text{and} \ \ \Omega^{2k+1}=\left(\begin{array}{cc} 0&C^{2k}\\C^{2k+2}&0\end{array}\right)$$
$$\exp(t\Omega)=\sum_{n=0}^{\infty}\tfrac{1}{n!}t^n\Omega^n=\sum_{k=0}^{\infty}\tfrac{1}{(2k)!}t^{2k}\Omega^{2k}+\sum_{k=0}^{\infty}\tfrac{1}{(2k+1)!}t^{2k+1}\Omega^{2k+1},$$ and summing the even and odd series blockwise yields (6).
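As a sanity check (not a proof), formula (6) can be verified numerically with SciPy's matrix functions, again using arbitrary sample $k_i$; note that for these values $B$ has negative eigenvalues, so its square root $C$ is complex, which the formula handles without trouble:

```python
import numpy as np
from scipy.linalg import expm, coshm, sinhm, sqrtm

k1, k2, k3, k4 = 2.0, 0.5, 0.3, 1.0      # sample values, chosen only for illustration
B = np.array([[-k1, k2], [k3, -k4]])
C = sqrtm(B)                              # a square root of B: C @ C == B (complex here)
I2, Z2 = np.eye(2), np.zeros((2, 2))
Omega = np.block([[Z2, I2], [B, Z2]])     # the block matrix (5)

t = 0.7
rhs = np.block([[coshm(t * C),     np.linalg.inv(C) @ sinhm(t * C)],
                [C @ sinhm(t * C), coshm(t * C)]])
print(np.allclose(expm(t * Omega), rhs))  # True: formula (6) holds
```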
Figure 1: $x_2$ in blue, $x_4$ in red in the case $k_2=k_3=0$.
See also this article: (https://deepblue.lib.umich.edu/bitstream/handle/2027.42/57871/ExponentialExplicitSoTAC1993.pdf?sequence=1).