Is there an easier way to compute $e^{tA}$ by hand?


I have this matrix

$$A:=\begin{bmatrix}0&-1&1\\0&0&1\\-1&0&1\end{bmatrix}$$

I have checked that $A^3\neq I$ but $A^4=I$, and I want to find $e^{tA}$. So what I did was

$$e^{tA}=\sum_{k=0}^\infty\frac{(tA)^k}{k!}=I\sum_{k=0}^\infty\frac{t^{4k}}{(4k)!}+A\sum_{k=0}^\infty\frac{t^{4k+1}}{(4k+1)!}+A^2\sum_{k=0}^\infty\frac{t^{4k+2}}{(4k+2)!}+A^3\sum_{k=0}^\infty\frac{t^{4k+3}}{(4k+3)!}$$
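As a sanity check on this regrouping, here is a minimal plain-Python sketch (no libraries; the sample value $t=0.7$ is arbitrary). It verifies $A^4=I$ and that the four regrouped scalar series reproduce the direct power series. Note also that each scalar series has a closed form, e.g. $\sum_k t^{4k}/(4k)!=(\cos t+\cosh t)/2$ and $\sum_k t^{4k+1}/(4k+1)!=(\sin t+\sinh t)/2$.

```python
# Check that A^4 = I and that the regrouped series matches the direct series.
from math import factorial

def mul(X, Y):  # 3x3 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[0, -1, 1], [0, 0, 1], [-1, 0, 1]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
powers = [I3, A, mul(A, A), mul(mul(A, A), A)]      # A^0 .. A^3
assert mul(powers[3], A) == I3 and powers[3] != I3  # A^4 = I, A^3 != I

t = 0.7  # arbitrary sample point

# Direct truncated series: sum_{k<=40} (tA)^k / k!
direct, P = [[0.0] * 3 for _ in range(3)], I3
for k in range(41):
    c = t**k / factorial(k)
    direct = [[direct[i][j] + c * P[i][j] for j in range(3)] for i in range(3)]
    P = mul(P, A)

# Regrouped form: sum_r A^r * s_r(t), with s_r(t) = sum_k t^(4k+r)/(4k+r)!
grouped = [[0.0] * 3 for _ in range(3)]
for r in range(4):
    s = sum(t**(4 * k + r) / factorial(4 * k + r) for k in range(10))
    grouped = [[grouped[i][j] + s * powers[r][i][j] for j in range(3)]
               for i in range(3)]

assert all(abs(direct[i][j] - grouped[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```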

but I don't know whether $e^{tA}$ can be found in a way that is easier to compute by hand than the above.

Evaluating the series above manually seems painful, so if anyone has an idea that simplifies the hand computation of $e^{tA}$, I would like to hear it.


There are 4 answers below.

Accepted answer:

Let
$$P=\begin{pmatrix}1-i&1+i&0\\-i&i&1\\1&1&1\end{pmatrix}.$$
The columns of $P$ are eigenvectors of $A$. Then
$$P^{-1}=\frac14\begin{pmatrix}1+i & -1+i & 1-i \\ 1-i & -1-i & 1+i \\-2 & 2 & 2\end{pmatrix}$$
and
$$P^{-1}AP=\begin{pmatrix}i&0&0\\0&-i&0\\0&0&1\end{pmatrix}.$$
Therefore
$$P^{-1}e^{tA}P=\begin{pmatrix}e^{it}&0&0\\0&e^{-it}&0\\0&0&e^t\end{pmatrix},$$
which means that $e^{tA}$ is equal to
$$\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix},$$
where
\begin{align}a_{11}&=\cos (t)\\a_{12}&=-\sin (t)\\a_{13} &= \sin (t) \\ a_{21}&= \frac{1}{2} \left(\cos (t)-\cosh (t)+\sin (t)-\sinh (t)\right)\\a_{22} &= \frac{1}{2} \left(\cos(t)+\cosh (t)-\sin (t)+\sinh (t)\right)\\a_{23} &= \frac{1}{2} \left(-\cos (t)+\cosh (t)+\sin (t)+\sinh (t)\right) \\a_{31}&= \frac{1}{2} \left(\cos (t)-\cosh (t)-\sin (t)-\sinh (t)\right)\\ a_{32}&= \frac{1}{2} \left(-\cos (t)+\cosh (t)-\sin (t)+\sinh (t)\right)\\a_{33} &= \frac{1}{2} \left(\cos (t)+\cosh (t)+\sin (t)+\sinh (t)\right).\end{align}
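As a numerical spot-check of these closed-form entries (not part of the original answer), here is a short plain-Python sketch that compares them against a truncated power series for $e^{tA}$ at an arbitrary sample value of $t$:

```python
# Spot-check the closed-form entries of e^{tA} against the truncated series.
from math import cos, sin, cosh, sinh, factorial

def mul(X, Y):  # 3x3 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def expm_series(t, A, N=40):
    """Truncated power series sum_{k<=N} (tA)^k / k!."""
    S = [[0.0] * 3 for _ in range(3)]
    P = [[float(i == j) for j in range(3)] for i in range(3)]
    for k in range(N + 1):
        c = t**k / factorial(k)
        S = [[S[i][j] + c * P[i][j] for j in range(3)] for i in range(3)]
        P = mul(P, A)
    return S

A = [[0, -1, 1], [0, 0, 1], [-1, 0, 1]]
t = 0.9
c_, s_, ch, sh = cos(t), sin(t), cosh(t), sinh(t)
closed = [
    [c_, -s_, s_],
    [(c_ - ch + s_ - sh) / 2, (c_ + ch - s_ + sh) / 2, (-c_ + ch + s_ + sh) / 2],
    [(c_ - ch - s_ - sh) / 2, (-c_ + ch - s_ + sh) / 2, (c_ + ch + s_ + sh) / 2],
]
E = expm_series(t, A)
assert all(abs(E[i][j] - closed[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```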

Answer:

The usual way to compute analytic functions of matrices is to exploit the fact that they behave well under conjugation:

$$\begin{align} f(PAP^{-1}) &= \sum_n a_n (PAP^{-1})^n \\&= \sum_n a_n P A^n P^{-1} \\&= P \left( \sum_n a_n A^n \right) P^{-1} \\&= P f(A) P^{-1} \end{align}$$

and thus

$$ f(A) = P^{-1} f(PAP^{-1}) P $$

The plan is to arrange for $B=PAP^{-1}$ to be a matrix for which $f(B)$ is easy to compute. Diagonal matrices, in particular, are usually the simplest:

$$\begin{align} f\left( \begin{pmatrix}b_1 & 0 & \ldots \\ 0 & b_2 & \ldots \\ \vdots & \vdots & \ddots \end{pmatrix} \right) &= \sum_n a_n \begin{pmatrix}b_1 & 0 & \ldots \\ 0 & b_2 & \ldots \\ \vdots & \vdots & \ddots \end{pmatrix}^n \\&=\sum_n a_n \begin{pmatrix}b_1^n & 0 & \ldots \\ 0 & b_2^n & \ldots \\ \vdots & \vdots & \ddots \end{pmatrix} \\&= \begin{pmatrix}\sum_n a_n b_1^n & 0 & \ldots \\ 0 & \sum_n a_n b_2^n & \ldots \\ \vdots & \vdots & \ddots \end{pmatrix} \\&= \begin{pmatrix}f(b_1) & 0 & \ldots \\ 0 & f(b_2) & \ldots \\ \vdots & \vdots & \ddots \end{pmatrix} \end{align} $$

So if you can diagonalize $A$, the plan above lets you compute any analytic function of $A$.
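A minimal sketch of this plan in plain Python (complex arithmetic, no libraries), reusing the eigenvector matrix $P$ and its inverse from the accepted answer above; with $P^{-1}AP=\operatorname{diag}(i,-i,1)$ we get $e^{tA}=P\operatorname{diag}(e^{it},e^{-it},e^{t})P^{-1}$, which is checked against the truncated power series:

```python
# e^{tA} via diagonalization, compared against the direct power series.
import cmath
from math import factorial

def mul(X, Y):  # 3x3 matrix product (works for complex entries)
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[0, -1, 1], [0, 0, 1], [-1, 0, 1]]
P = [[1 - 1j, 1 + 1j, 0], [-1j, 1j, 1], [1, 1, 1]]
Pinv = [[(1 + 1j) / 4, (-1 + 1j) / 4, (1 - 1j) / 4],
        [(1 - 1j) / 4, (-1 - 1j) / 4, (1 + 1j) / 4],
        [-0.5, 0.5, 0.5]]

t = 1.3
D = [[cmath.exp(1j * t), 0, 0], [0, cmath.exp(-1j * t), 0], [0, 0, cmath.exp(t)]]
E = mul(mul(P, D), Pinv)  # e^{tA} = P e^{tD} P^{-1}

# Reference: truncated power series sum_{k<=40} (tA)^k / k!
S = [[0.0] * 3 for _ in range(3)]
Q = [[float(i == j) for j in range(3)] for i in range(3)]
for k in range(41):
    c = t**k / factorial(k)
    S = [[S[i][j] + c * Q[i][j] for j in range(3)] for i in range(3)]
    Q = mul(Q, A)

assert all(abs(E[i][j] - S[i][j]) < 1e-10 for i in range(3) for j in range(3))
```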

Answer:

If $A$ is any diagonalizable complex $n$ by $n$ matrix with distinct eigenvalues $\lambda_1,\dots,\lambda_k$, then we have $$ e^A=\sum_{p=1}^k\ e^{\lambda_p}\ \prod_{q\ne p}\ \frac{A-\lambda_q}{\lambda_p-\lambda_q}\ . $$ More precisely, $$ f(X):=\sum_{p=1}^k\ e^{\lambda_p}\ \prod_{q\ne p}\ \frac{X-\lambda_q}{\lambda_p-\lambda_q} $$ is the unique polynomial $f(X)\in\mathbb C[X]$ of degree at most $k-1$ such that $e^A=f(A)$. (It is sometimes called the Lagrange Interpolation Polynomial.)
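For the matrix $A$ in the question, the same formula applied to $tA$ (whose eigenvalues are $t, it, -it$) gives $e^{tA}=\sum_p e^{t\lambda_p}\prod_{q\neq p}\frac{A-\lambda_q}{\lambda_p-\lambda_q}$ with $\lambda=1,i,-i$, since the spectral projectors do not depend on $t$. A plain-Python sketch of this, checked against the truncated power series:

```python
# e^{tA} via the Lagrange interpolation formula on the eigenvalues 1, i, -i.
import cmath
from math import factorial

def mul(X, Y):  # 3x3 matrix product (works for complex entries)
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[0, -1, 1], [0, 0, 1], [-1, 0, 1]]
lams = [1, 1j, -1j]
t = 0.8

E = [[0.0] * 3 for _ in range(3)]
for p, lp in enumerate(lams):
    # Spectral projector prod_{q != p} (A - lam_q I) / (lp - lam_q)
    proj = [[float(i == j) for j in range(3)] for i in range(3)]
    for q, lq in enumerate(lams):
        if q == p:
            continue
        shifted = [[A[i][j] - (lq if i == j else 0) for j in range(3)]
                   for i in range(3)]
        proj = [[v / (lp - lq) for v in row] for row in mul(proj, shifted)]
    w = cmath.exp(t * lp)
    E = [[E[i][j] + w * proj[i][j] for j in range(3)] for i in range(3)]

# Reference: truncated power series sum_{k<=40} (tA)^k / k!
S = [[0.0] * 3 for _ in range(3)]
Q = [[float(i == j) for j in range(3)] for i in range(3)]
for k in range(41):
    c = t**k / factorial(k)
    S = [[S[i][j] + c * Q[i][j] for j in range(3)] for i in range(3)]
    Q = mul(Q, A)

assert all(abs(E[i][j] - S[i][j]) < 1e-10 for i in range(3) for j in range(3))
```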

There is an obvious generalization to arbitrary complex $n$ by $n$ matrices. This generalization is essentially given by Taylor's Theorem.

Here is an incomplete statement of the generalization:

If $A$ is any complex $n$ by $n$ matrix with minimal polynomial of degree $k$, then there is a unique polynomial $f(X)\in\mathbb C[X]$ of degree at most $k-1$ such that $e^A=f(A)$. Moreover, there is a simple explicit formula for $f(X)$ based on Taylor's Theorem. Note also that $f(X)$ depends only on the minimal polynomial of $A$.

Answer:

My approach uses the fact that the complexification of $A$ has three distinct eigenvalues $\lambda=1,i,-i$, while avoiding complex arithmetic: there exists some $T$ such that

$$T^{-1}AT=\begin{bmatrix}1&0&0\\0&0&-1\\0&1&0\end{bmatrix}=:B$$

due to the decomposition of $\Bbb R^3$ into two complementary $A$-invariant subspaces. I find the invertible matrix $T=\left[\begin{smallmatrix}0&0&2\\1&-1&1\\1&1&1\end{smallmatrix}\right]$. I also know that

$$\exp\left(t\begin{bmatrix}0&0&0\\0&0&-1\\0&1&0\end{bmatrix}\right)=\begin{bmatrix}1&0&0\\0&\cos t&-\sin t\\0&\sin t&\cos t\end{bmatrix}$$

Decomposing $B=\left[\begin{smallmatrix}0&0&0\\0&0&-1\\0&1&0\end{smallmatrix}\right]+\left[\begin{smallmatrix}1&0&0\\0&0&0\\0&0&0\end{smallmatrix}\right]$ into these two commuting matrices, I get

$$e^{tA}=Te^{tB}T^{-1}=T\begin{bmatrix}1&0&0\\0&\cos t&-\sin t\\0&\sin t&\cos t\end{bmatrix}\begin{bmatrix}e^t&0&0\\0&1&0\\0&0&1\end{bmatrix}T^{-1}$$

which is computable by hand.
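A plain-Python sketch of this real-arithmetic route, checked against the truncated power series. The inverse $T^{-1}=\frac12\left[\begin{smallmatrix}-1&1&1\\0&-1&1\\1&0&0\end{smallmatrix}\right]$ is not given in the answer; it was worked out by hand here and is used directly:

```python
# e^{tA} = T * R(t) * diag(e^t, 1, 1) * T^{-1}, real arithmetic only.
from math import cos, sin, exp, factorial

def mul(X, Y):  # 3x3 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[0, -1, 1], [0, 0, 1], [-1, 0, 1]]
T = [[0, 0, 2], [1, -1, 1], [1, 1, 1]]
Tinv = [[-0.5, 0.5, 0.5], [0.0, -0.5, 0.5], [0.5, 0.0, 0.0]]  # hand-computed

t = 1.1
R = [[1, 0, 0], [0, cos(t), -sin(t)], [0, sin(t), cos(t)]]  # rotation block
D = [[exp(t), 0, 0], [0, 1, 0], [0, 0, 1]]                  # e^t on the 1-eigenline
E = mul(mul(mul(T, R), D), Tinv)

# Reference: truncated power series sum_{k<=40} (tA)^k / k!
S = [[0.0] * 3 for _ in range(3)]
Q = [[float(i == j) for j in range(3)] for i in range(3)]
for k in range(41):
    c = t**k / factorial(k)
    S = [[S[i][j] + c * Q[i][j] for j in range(3)] for i in range(3)]
    Q = mul(Q, A)

assert all(abs(E[i][j] - S[i][j]) < 1e-10 for i in range(3) for j in range(3))
```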