Prove a formula for a specific matrix exponential


Let $A= \begin{pmatrix} x & y\\ y & x \end{pmatrix}$ with $x,y \in \Bbb R$.

Show that $$\exp(A) = \exp(x)\begin{pmatrix} \cosh(y) & \sinh(y)\\ \sinh(y) & \cosh(y) \end{pmatrix}. $$

So far I don't really know how to start; I've only worked with matrix exponentials via the Jordan form, and without variables, which led me here to maybe get some tips on how to begin. I know the identities $\cosh(y)= \frac{\exp(y)+\exp(-y)}{2}$ and $\sinh(y)= \frac{\exp(y)-\exp(-y)}{2}$, but I really don't see whether they help me in any way.

Thank you in advance.

Accepted answer:

Set

$P = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}; \tag 1$

then

$P^2 = I; \; P^3 = PP^2 = PI = P; \; P^4 = PP^3 = P^2 = I, \; P^5 = PP^4 = PI = P, \tag 2$

and so forth; indeed, in general we have, for $n$ a non-negative integer,

$P^{2n} = (P^2)^n = I^n = I, \tag 3$

$P^{2n + 1} = PP^{2n} = PI = P; \tag 4$

we see that the even powers of $P$ are all $I$, and the odd powers are all $P$ itself; there are no other possibilities. Furthermore since $P$ and $I$ commute with one another, that is, since $PI = IP$, we also have $xI$ commuting with $yP$ for $x, y \in \Bbb R$, and thus with

$A = \begin{bmatrix} x & y \\ y & x \end{bmatrix} = xI + yP, \tag 5$

we have

$e^A = e^{xI + yP} = e^{xI} e^{yP}; \tag 6$

that (6) holds follows from the fact that for commuting matrices $C$ and $D$, that is, for $CD = DC$, we may prove

$e^{C + D} = e^C e^D \tag 7$

from the power series for $\exp(\cdot)$,

$e^C = \displaystyle \sum_0^\infty \dfrac{C^n}{n!} \tag 8$

exactly as in the case $c, d \in \Bbb R$:

$e^{c + d} = e^c e^d; \tag 9$

the algebra involved is essentially the same in either case; the details may easily be found elsewhere.
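As a quick numerical sanity check of the commuting-case identity (7), not part of the proof, one can compare $e^{C+D}$ with $e^Ce^D$ for $C = xI$ and $D = yP$, which commute since $I$ commutes with everything. This assumes NumPy and SciPy (`scipy.linalg.expm`) are available; the particular values of $x$ and $y$ are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

# C = xI and D = yP commute, since I commutes with every matrix
x, y = 0.7, -1.3
I = np.eye(2)
P = np.array([[0.0, 1.0], [1.0, 0.0]])
C, D = x * I, y * P

commutes = np.allclose(C @ D, D @ C)        # CD = DC
lhs = expm(C + D)
rhs = expm(C) @ expm(D)
ok = np.allclose(lhs, rhs)                  # e^{C+D} = e^C e^D
print(commutes, ok)
```

For generic non-commuting matrices the same comparison fails, which is why the commutativity hypothesis in (7) matters.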

Since

$e^{xI} = e^xI, \tag{10}$

we only need compute

$e^{yP} = \displaystyle \sum_0^\infty \dfrac{(yP)^n}{n!} = \sum_0^\infty \dfrac{y^nP^n}{n!} = \sum_0^\infty \dfrac{y^{2n}}{(2n)!}P^{2n} + \sum_0^\infty \dfrac{y^{2n + 1}}{(2n + 1)!}P^{2n + 1}$ $=\displaystyle \sum_0^\infty \dfrac{y^{2n}}{(2n)!}I + \sum_0^\infty \dfrac{y^{2n + 1}}{(2n + 1)!}P = \left (\sum_0^\infty \dfrac{y^{2n}}{(2n)!} \right ) I + \left ( \sum_0^\infty \dfrac{y^{2n + 1}}{(2n + 1)!} \right )P; \tag{11}$

since

$\cosh y = \sum_0^\infty \dfrac{y^{2n}}{(2n)!}, \; \sinh y = \sum_0^\infty \dfrac{y^{2n + 1}}{(2n + 1)!} \tag{12}$

(these are the standard Maclaurin series for $\cosh$ and $\sinh$), we find

$e^{yP} = (\cosh y) I + (\sinh y) P = \begin{bmatrix} \cosh y & \sinh y \\ \sinh y & \cosh y \end{bmatrix}; \tag{13}$
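The series manipulation in (11)-(13) can also be checked numerically, again as a sanity check rather than a proof: accumulating partial sums of $\sum_n (yP)^n/n!$ should converge to $(\cosh y)I + (\sinh y)P$. NumPy is assumed; the value of $y$ and the truncation at 30 terms are arbitrary choices.

```python
import numpy as np

y = 0.8
P = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

# partial sum of sum_{n>=0} (yP)^n / n!, built term by term
S = np.zeros((2, 2))
term = I.copy()                  # (yP)^0 / 0!
for n in range(1, 30):
    S = S + term
    term = term @ (y * P) / n    # (yP)^n / n!

target = np.cosh(y) * I + np.sinh(y) * P
ok = np.allclose(S, target)
print(ok)
```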

therefore,

$e^A = e^x I e^{yP} = e^x \begin{bmatrix} \cosh y & \sinh y \\ \sinh y & \cosh y \end{bmatrix}. \tag{14}$
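The final formula (14) can be verified numerically against a general-purpose matrix exponential; this is only a spot check at sample values, assuming SciPy's `scipy.linalg.expm` is available.

```python
import numpy as np
from scipy.linalg import expm

x, y = 0.4, 1.1
A = np.array([[x, y], [y, x]])
claimed = np.exp(x) * np.array([[np.cosh(y), np.sinh(y)],
                                [np.sinh(y), np.cosh(y)]])
ok = np.allclose(expm(A), claimed)
print(ok)
```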

Nota Bene: We see in the above that by using abstract properties of the matrix $P$, we may avoid calculating its eigenvalues and eigenvectors. Furthermore, the method used here will apply to any matrix $P$ such that $P^2 = I$; $P$ does not have to take the form (1). Indeed we may find all possible such $P$ by writing

$P = \begin{bmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{bmatrix}, \tag{15}$

from which

$P^2 = \begin{bmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{bmatrix} \begin{bmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{bmatrix} = \begin{bmatrix} p_{11}^2 + p_{12}p_{21} & p_{12}(p_{11} + p_{22}) \\ p_{21}(p_{11} + p_{22}) & p_{22}^2 + p_{12}p_{21}\end{bmatrix} = I; \tag{16}$

thus

$p_{11}^2 + p_{12}p_{21} = p_{22}^2 + p_{12}p_{21} = 1, \tag{17}$

and

$p_{12}(p_{11} + p_{22}) = p_{21}(p_{11} + p_{22}) = 0; \tag{18}$

if

$p_{11} + p_{22} \ne 0, \tag{19}$

then

$p_{12} = p_{21} = 0, \tag{20}$

and so (17) yields

$p_{11}^2 = p_{22}^2 = 1; \tag{21}$

(19) and (21) together imply

$p_{11} = p_{22} = \pm 1, \tag{22}$

so that

$P = \pm I; \tag{23}$

on the other hand, when

$p_{11} + p_{22} = 0, \tag{24}$

we may take

$p_{22} = -p_{11} = -\alpha; \tag{25}$

thus

$p_{11}^2 + p_{12}p_{21} = \alpha^2 + p_{12}p_{21} = 1, \tag{26}$

so that setting

$p_{12} = \beta \ne 0, \tag{27}$

we find

$p_{21} = \dfrac{1 - \alpha^2}{\beta}; \tag{28}$

in this case, $P$ takes the form

$P = \begin{bmatrix} \alpha & \beta \\ \dfrac{1 - \alpha^2}{\beta} & -\alpha \end{bmatrix}. \tag{29}$

If we choose

$p_{12} = \beta = 0, \tag{30}$

then (25) and (26) force

$p_{11} = - p_{22} = \pm 1, \tag{31}$

leaving $p_{21}$ undetermined. In this situation we may set

$P = \pm \begin{bmatrix} 1 & 0 \\ \gamma & -1 \end{bmatrix}; \tag{32}$

if instead we choose

$p_{21} = 0, \tag{33}$

we obtain

$P = \pm \begin{bmatrix} 1 & \gamma \\ 0 & -1 \end{bmatrix}. \tag{34}$

We have thus covered all possible cases for the matrix $P$ with $P^2 = I$. End of Note.
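The three families (29), (32), and (34) can be checked symbolically: each should square to the identity for arbitrary parameter values. A minimal sketch, assuming SymPy is available (the names `P29`, `P32`, `P34` just label the equation numbers above):

```python
import sympy as sp

a, b, g = sp.symbols('alpha beta gamma', nonzero=True)
I2 = sp.eye(2)

# form (29): trace-zero P with p12 = beta != 0
P29 = sp.Matrix([[a, b], [(1 - a**2) / b, -a]])
# form (32): beta = 0, p21 = gamma free
P32 = sp.Matrix([[1, 0], [g, -1]])
# form (34): p21 = 0, p12 = gamma free
P34 = sp.Matrix([[1, g], [0, -1]])

ok = all(sp.simplify(M**2 - I2) == sp.zeros(2, 2)
         for M in (P29, P32, P34))
print(ok)
```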

Another answer:

Note that $(1,-1)$ and $(1,1)$ are eigenvectors of $A$, with eigenvalues $x-y$ and $x+y$ respectively. Therefore, if$$P=\begin{pmatrix}1&1\\-1&1\end{pmatrix}$$(the columns of $P$ are the eigenvectors), then$$P^{-1}AP=\begin{pmatrix}x-y&0\\0&x+y\end{pmatrix},$$and therefore$$P^{-1}e^AP=\begin{pmatrix}e^{x-y}&0\\0&e^{x+y}\end{pmatrix}=e^x\begin{pmatrix}e^{-y}&0\\0&e^y\end{pmatrix}.$$So,$$e^A=e^xP\begin{pmatrix}e^{-y}&0\\0&e^y\end{pmatrix}P^{-1}=e^x\begin{pmatrix}\cosh y&\sinh y\\\sinh y&\cosh y\end{pmatrix}.$$
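This diagonalization route can also be spot-checked numerically, assuming NumPy and SciPy; the sample values of $x$ and $y$ are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

x, y = -0.5, 0.9
A = np.array([[x, y], [y, x]])
P = np.array([[1.0, 1.0], [-1.0, 1.0]])  # columns: eigenvectors (1,-1), (1,1)
Pinv = np.linalg.inv(P)

# P^{-1} A P should be diag(x - y, x + y)
ok_diag = np.allclose(Pinv @ A @ P, np.diag([x - y, x + y]))
# e^A = P diag(e^{x-y}, e^{x+y}) P^{-1}
ok_exp = np.allclose(expm(A),
                     P @ np.diag([np.exp(x - y), np.exp(x + y)]) @ Pinv)
print(ok_diag, ok_exp)
```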