Matrix exponential of non diagonalizable matrix?

I am currently self-learning about the matrix exponential and found that computing the matrix exponential of a diagonalizable matrix is pretty straightforward :).

I do not, however, know how to find the matrix exponential of a non-diagonalizable matrix.

For example, consider the matrix $$\begin{bmatrix}1 & 0 \\ 1 & 1\end{bmatrix}$$

Is there a general process for finding the matrix exponential of a non-diagonalizable matrix? If so, can someone please show me an example of the process? :) I am not looking for an answer for the above matrix (since I just made it up), but rather for the actual method of finding the matrix exponential, so I can apply it to other examples :)

There are 3 answers below.

BEST ANSWER

There are two facts that are usually used for this computation:

Theorem: Suppose that $A$ and $B$ commute (i.e. $AB = BA$). Then $\exp(A + B) = \exp(A)\exp(B)$

Theorem: Any (square) matrix $A$ can be written as $A = D + N$, where $D$ is diagonalizable, $N$ is nilpotent, and $ND = DN$ (this is the Jordan–Chevalley decomposition).

With that, we have enough information to compute the exponential of every matrix.

For your example, we have $$ D = \pmatrix{1&0\\0&1} = I, \quad N = \pmatrix{0&0\\1&0}. $$ We find that $$ \exp(D) = eI\\ \exp(N) = I + N + \frac 12 N^2 + \cdots = I + N + 0 = I + N. $$ So, we have $$ \exp(D + N) = \exp(D) \exp(N) = (eI)(I+N) = e(I+N) = \\ \pmatrix{e&0\\e&e}. $$
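If you want a quick numerical sanity check (my addition, not part of the answer above), the short NumPy/SciPy sketch below reproduces the hand computation; the names `A`, `D`, `N` simply mirror the matrices above.

```python
import numpy as np
from scipy.linalg import expm

# The example matrix and its D + N splitting from the answer above.
A = np.array([[1.0, 0.0],
              [1.0, 1.0]])
D = np.eye(2)                     # diagonalizable (here already diagonal) part
N = A - D                         # nilpotent part: N @ N == 0

# exp(D) = e*I and exp(N) = I + N, since the series for exp(N) truncates.
by_hand = np.e * (np.eye(2) + N)  # e * (I + N) = [[e, 0], [e, e]]

print(np.allclose(expm(A), by_hand))  # True: expm(A) matches e*(I + N)
```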

ANSWER

In the $2\times2$ case, you can find the exponential of a matrix $A$ without having to decompose it into $BMB^{-1}$ form. In particular, you only need the eigenvalues; you don't need to find any eigenvectors. There are three cases, as follows (a numerical sketch checking each formula appears after the three cases).


Distinct Real Eigenvalues: Let $P_1 = (A-\lambda_2I)/(\lambda_1-\lambda_2)$ and $P_2 = (A-\lambda_1I)/(\lambda_2-\lambda_1)$, where $\lambda_1,\lambda_2$ are the eigenvalues. These two matrices are projections onto the eigenspaces corresponding to $\lambda_1$ and $\lambda_2$, respectively. They have some handy properties: $P_1P_2=P_2P_1=0$, $P_1+P_2=I$, and $A=\lambda_1P_1+\lambda_2P_2$. Using these projections, $$\exp(tA)=\exp(t\lambda_1P_1+t\lambda_2P_2)=\mathrm e^{\lambda_1t}P_1+\mathrm e^{\lambda_2t}P_2.$$


Repeated Eigenvalue: Let $G=A-\lambda I$, where $\lambda$ is the eigenvalue. By the Cayley-Hamilton theorem, $(A-\lambda I)^2=0$, so $G$ is nilpotent. Since $G$ and $\lambda I$ commute, $\exp(tA)=\exp(\lambda tI)\exp(tG)$, and since $G^2=0$ every higher term of the power series for $\exp(tG)$ vanishes, leaving $\exp(tG)=I+tG$. Therefore $$\exp(tA) = \mathrm e^{\lambda t}(I+tG).$$


Complex Eigenvalues: The eigenvalues are of the form $\lambda=\alpha\pm\mathrm i\beta$, and the characteristic equation is $(\lambda-\alpha)^2+\beta^2=0$. By the Cayley-Hamilton theorem, $(A-\alpha I)^2+\beta^2I=0$, so the traceless matrix $G=A-\alpha I$ satisfies $G^2=-\beta^2 I$. Writing $A=\alpha I+G$, we have $\exp(tA)=\exp(\alpha tI)\exp(tG)$. Expanding the latter as a power series and using the above equality, $$ \exp(tG) = I+tG-\frac{\beta^2t^2}{2!}I-\frac{\beta^2t^3}{3!}G+\frac{\beta^4t^4}{4!}I+\dots. $$ The coefficient of $I$ is the power series for $\cos{\beta t}$, while the coefficient of $G$ is the power series for $(\sin{\beta t})/\beta$. Thus, $$ \exp(tG) = \cos{\beta t}\;I+[(\sin{\beta t})/\beta]\;G \\ \text{and} \\ \exp(tA) = \mathrm e^{\alpha t}\exp(tG). $$
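Here is the numerical sketch referred to above (my addition, not part of the original answer; `expm_2x2` is just an illustrative helper name). It implements the three formulas and spot-checks them against `scipy.linalg.expm`.

```python
import numpy as np
from scipy.linalg import expm

def expm_2x2(A, t=1.0, tol=1e-12):
    """exp(t*A) for a real 2x2 matrix A, using only its eigenvalues."""
    A = np.asarray(A, dtype=float)
    tr, det = A.trace(), np.linalg.det(A)
    disc = tr**2 - 4*det          # discriminant of the characteristic polynomial
    I = np.eye(2)
    alpha = tr / 2.0
    if disc > tol:                # distinct real eigenvalues
        l1 = alpha + np.sqrt(disc) / 2.0
        l2 = alpha - np.sqrt(disc) / 2.0
        P1 = (A - l2*I) / (l1 - l2)   # spectral projections
        P2 = (A - l1*I) / (l2 - l1)
        return np.exp(l1*t)*P1 + np.exp(l2*t)*P2
    elif disc < -tol:             # complex eigenvalues alpha +/- i*beta
        beta = np.sqrt(-disc) / 2.0
        G = A - alpha*I           # traceless part, G @ G = -beta^2 * I
        return np.exp(alpha*t) * (np.cos(beta*t)*I + (np.sin(beta*t)/beta)*G)
    else:                         # repeated eigenvalue
        G = A - alpha*I           # nilpotent: G @ G = 0
        return np.exp(alpha*t) * (I + t*G)

# Spot-check all three cases against scipy.linalg.expm.
for A in ([[2, 1], [0, -1]],      # distinct real eigenvalues
          [[1, 0], [1, 1]],       # repeated eigenvalue (the question's matrix)
          [[0, -2], [2, 0]]):     # complex eigenvalues
    A = np.array(A, dtype=float)
    assert np.allclose(expm_2x2(A), expm(A))
```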

ANSWER

The Laplace transform approach is a useful way to find $e^{A t}$, whether $A$ is diagonalizable or not; it is a widely used method in electrical engineering.

Basically, we use the formula $$ e^{A t} = \mathcal{L}^{-1}\left[ (s I - A)^{-1} \right] $$

As an illustration, consider the defective matrix $$ A = \left[ \begin{array}{cc} 1 & 0 \\ 1 & 1 \\ \end{array} \right] $$

We find $$ s I - A = \left[ \begin{array}{cc} s - 1 & 0 \\ -1 & s - 1 \\ \end{array} \right] $$ where $s$ is a complex variable.

Inverting, we obtain $$ (s I - A)^{-1} = \left[ \begin{array}{cc} {1 \over s - 1} & 0 \\[2mm] {1 \over (s - 1)^2} & {1 \over s - 1} \\[2mm] \end{array} \right] $$

The inverse Laplace transform of $(s I - A)^{-1}$ is the state transition matrix, which is also the matrix exponential, $e^{A t}$.

Hence, we get $$ e^{A t} = \mathcal{L}^{-1}\left[ (s I - A)^{-1} \right] = \left[ \begin{array}{cc} \mathcal{L}^{-1}\left( {1 \over s - 1} \right) & \mathcal{L}^{-1}(0) \\[2mm] \mathcal{L}^{-1}\left( {1 \over (s - 1)^2} \right) & \mathcal{L}^{-1}\left( {1 \over s - 1} \right) \\[2mm] \end{array} \right] = \left[ \begin{array}{cc} e^{t} & 0 \\[2mm] t e^t & e^t \\[2mm] \end{array} \right] $$
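For completeness, here is a small SymPy sketch (my own addition, not part of the answer; it assumes SymPy's `inverse_laplace_transform` handles these simple rational entries) that reproduces the computation above symbolically.

```python
import sympy as sp

# Symbolic check of e^{A t} = L^{-1}[(s I - A)^{-1}] for the defective matrix above.
s = sp.symbols('s')
t = sp.symbols('t', positive=True)   # t > 0 so Heaviside factors collapse to 1

A = sp.Matrix([[1, 0],
               [1, 1]])

resolvent = (s * sp.eye(2) - A).inv()   # (s I - A)^{-1}, entries rational in s

# Invert the Laplace transform entry by entry (skip the zero entry).
eAt = resolvent.applyfunc(
    lambda f: sp.inverse_laplace_transform(f, s, t) if f != 0 else sp.S.Zero)

print(sp.simplify(eAt))                  # expected: Matrix([[exp(t), 0], [t*exp(t), exp(t)]])
print(sp.simplify(eAt - (A * t).exp()))  # expected: zero matrix, agreeing with the direct exponential
```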