Making sense of $x'=Ax\implies x(t)=e^{Jt}x(0)$ when $A$ is nondiagonalizable


How do I make sense of the fact that $x'=Ax$ is solved by $x(t)=e^{Jt}x(0)$ when $A$ is nondiagonalizable?

When we have an $n\times n$ system of ODEs, $\dfrac{dx}{dt}=Ax$, suppose first that $A=VD_AV^{-1}$ is diagonalizable.

Substitute $x=Vy$,

$$ \dfrac{dx}{dt}=Ax\implies V\dfrac{dy}{dt}=AVy\implies \dfrac{dy}{dt}=V^{-1}AVy=D_Ay\\ \begin{bmatrix}y_1'\\\vdots\\y_n'\end{bmatrix}=\begin{bmatrix}\lambda_1&\cdots&0\\\vdots&\ddots&\vdots\\0&\cdots&\lambda_n\end{bmatrix}\begin{bmatrix}y_1\\\vdots\\y_n\end{bmatrix}\implies y_i(t)=c_ie^{\lambda_it}\\ x(t)=Vy(t)=\begin{bmatrix}v_1&\cdots&v_n\end{bmatrix}\begin{bmatrix}c_1e^{\lambda_1t}\\\vdots\\c_ne^{\lambda_nt}\end{bmatrix}\implies x(t)=c_1v_1e^{\lambda_1t}+\cdots+c_nv_ne^{\lambda_nt}\\ x(0)=V\begin{bmatrix}c_1\\\vdots\\c_n\end{bmatrix}=Vc\implies c=V^{-1}x(0) $$ $$ x(t)=V\begin{bmatrix}e^{\lambda_1t}&\cdots&0\\\vdots&\ddots&\vdots\\0&\cdots&e^{\lambda_nt}\end{bmatrix}\begin{bmatrix}c_1\\\vdots\\c_n\end{bmatrix}=V\begin{bmatrix}e^{\lambda_1t}&\cdots&0\\\vdots&\ddots&\vdots\\0&\cdots&e^{\lambda_nt}\end{bmatrix}c=V\begin{bmatrix}e^{\lambda_1t}&\cdots&0\\\vdots&\ddots&\vdots\\0&\cdots&e^{\lambda_nt}\end{bmatrix}V^{-1}x(0)=e^{At}x(0) $$

where $e^{At}=V\begin{bmatrix}e^{\lambda_1t}&\cdots&0\\\vdots&\ddots&\vdots\\0&\cdots&e^{\lambda_nt}\end{bmatrix}V^{-1}$

I think it makes sense why we define $e^{At}=Ve^{D_At}V^{-1}$ the way we do and how it enters the picture, and one can also check that this agrees with the Taylor series expansion.
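As a quick numerical sanity check of $e^{At}=Ve^{D_At}V^{-1}$ (not part of the original derivation; the matrix $A$ below is an arbitrary example chosen to have distinct eigenvalues), one can compare scipy's matrix exponential against the eigendecomposition formula:

```python
# Numerical check of e^{At} = V e^{D_A t} V^{-1} for a diagonalizable A.
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [3.0, 0.0]])          # eigenvalues 3 and -2: diagonalizable
t = 0.7

eigvals, V = np.linalg.eig(A)       # columns of V are eigenvectors
lhs = expm(t * A)                   # matrix exponential via scipy
rhs = V @ np.diag(np.exp(t * eigvals)) @ np.linalg.inv(V)

print(np.allclose(lhs, rhs))        # the two expressions agree
```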

When $A$ is nondiagonalizable, $A$ can still be decomposed as $A=XJX^{-1}$, where $J=X^{-1}AX$ is called the Jordan canonical form and the columns $x_k$ of $X$ are generalized eigenvectors satisfying $Ax_k=\lambda_kx_k$ or $Ax_k=\lambda_kx_k+x_{k-1}$, and $J=X^{-1}AX=\begin{bmatrix}J_1&\cdots&0\\\vdots&\ddots&\vdots\\0&\cdots&J_s\end{bmatrix}=J_1\oplus\cdots\oplus J_s$, where each block $J_i=\begin{bmatrix}\lambda_i&1&0&\cdots&0\\0&\lambda_i&1&\cdots&0\\\vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\cdots&\lambda_i\end{bmatrix}$ carries a single eigenvalue $\lambda_i$ repeated along its diagonal.
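To see a Jordan decomposition concretely, sympy can compute one symbolically (the $2\times2$ matrix below is a made-up example with a double eigenvalue but only one eigenvector; numpy offers no Jordan-form routine, since the decomposition is numerically unstable):

```python
# Computing the Jordan decomposition A = X J X^{-1} with sympy.
import sympy as sp

A = sp.Matrix([[3, 1],
               [-1, 1]])                 # double eigenvalue 2, one eigenvector
X, J = A.jordan_form()                   # A = X J X^{-1}
print(J)                                 # a single 2x2 Jordan block for lambda = 2
print(sp.simplify(X * J * X.inv() - A))  # zero matrix: decomposition checks out
```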

Substitute $x=Xz$:

$$ x'=Ax\implies Xz'=AXz\\ z'=X^{-1}AXz=Jz\\ \begin{bmatrix}z'_1\\\vdots\\z'_n\end{bmatrix}=\begin{bmatrix}J_1&\cdots&0\\\vdots&\ddots&\vdots\\0&\cdots&J_s\end{bmatrix}\begin{bmatrix}z_1\\\vdots\\z_n\end{bmatrix} $$ but I don't see how to get anywhere from here!

In this case, how do I make sense of the fact that $x(t)=e^{Jt}x(0)$ solves $x'=Ax$, similar to the result obtained above?


There are 2 answers below.

BEST ANSWER

The solution of the equation

$$ x'(t) = A x(t) $$

is

$$ x(t) = e^{t A } x(0) $$ whether or not $A$ is diagonalizable. This can easily be seen by differentiating the solution (Added: I give a proof below). Note that $e^{tA}$ and $A$ commute with each other, so there is no problem with that.

The Jordan canonical form is a way to compute the matrix exponential. But that is not needed here.
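As an illustrative check (my own example, not from the answer), one can verify numerically that $t\mapsto e^{tA}x(0)$ satisfies $x'=Ax$ for a nondiagonalizable $A$, using a centered finite difference for the derivative:

```python
# Check that x(t) = e^{tA} x(0) solves x' = Ax even when A is NOT diagonalizable.
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])                # a Jordan block: not diagonalizable
x0 = np.array([1.0, -1.0])
t, h = 0.5, 1e-6

x = lambda s: expm(s * A) @ x0
deriv = (x(t + h) - x(t - h)) / (2 * h)   # numerical x'(t)
print(np.allclose(deriv, A @ x(t), atol=1e-6))
```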

If there is an invertible matrix $V$ such that

$$ A = V D V^{-1} $$

then

$$ (t A)^n = t^n V D^n V^{-1} $$

and so

$$ \exp{(tA) } = V e^{tD} V^{-1}. $$

Note that $D$ need not be diagonal for this to hold. The point is that when $D$ is in Jordan block form, the computation of $e^{tD}$ is greatly simplified.
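To see how simple $e^{tD}$ becomes for a single Jordan block: writing $D=\lambda I+N$ with $N$ nilpotent, $\lambda I$ and $N$ commute and the series for $e^{tN}$ terminates after finitely many terms. A sketch of this check on a $3\times3$ example of my own:

```python
# For a Jordan block J = lambda*I + N (N nilpotent, N^3 = 0 here),
# e^{tJ} = e^{lambda t} (I + tN + (tN)^2/2!), a FINITE sum.
import numpy as np
from scipy.linalg import expm

lam, t = -1.5, 0.8
N = np.diag([1.0, 1.0], k=1)            # superdiagonal of ones: nilpotent part
J = lam * np.eye(3) + N

closed = np.exp(lam * t) * (np.eye(3) + t * N + (t * N) @ (t * N) / 2)
print(np.allclose(expm(t * J), closed)) # matches scipy's expm
```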

Added

I will show here that the exponential solves the differential equation. In fact I will show something slightly more general, namely:

Let $B(t)$ be a differentiable matrix-valued function. Provided $B'(t)$ commutes with $B(t)$ (for all $t$ in the interval of interest), we have

$$ \frac{d}{dt} e^{B(t)} = B'(t) e^{B(t)} $$

Proof.

\begin{align} \frac{d}{dt} [B(t)]^n &= B'(t) [B(t)]^{n-1} + B(t) B'(t) [B(t)]^{n-2} + \ldots + [B(t)]^{n-1} B'(t) \\ &= n B'(t) [B(t)]^{n-1} \end{align}

where the last equality stems from commutativity. Hence

\begin{align} \frac{d}{dt} e^{B(t)} &= \frac{d}{dt}\sum_{n=0}^{\infty} \frac{[B(t)]^n}{n!} \\ &= \sum_{n=0}^{\infty} \frac{n B'(t) [B(t)]^{n-1}}{n!} \\ &= B'(t) \sum_{n=1}^{\infty} \frac{[B(t)]^{n-1}}{(n-1)!} \\ &= B'(t) e^{B(t)}. \end{align}

Now, to prove the claim, simply take $B(t) = t A$: then $B'(t) = A$, which commutes with $tA$. $\square$
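A finite-difference sketch of the claim for $B(t)=tA$ (my own numerical example, assuming numpy/scipy are available):

```python
# Check d/dt e^{B(t)} = B'(t) e^{B(t)} for B(t) = tA, where B'(t) = A
# automatically commutes with B(t) = tA.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t, h = 1.2, 1e-6

lhs = (expm((t + h) * A) - expm((t - h) * A)) / (2 * h)  # numerical d/dt e^{tA}
rhs = A @ expm(t * A)
print(np.allclose(lhs, rhs, atol=1e-6))
```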


Here is an intuition in a $2\times2$ setting:

Say your system is $\dot{y} = Ay$, with $A$ a $2\times 2$ matrix which cannot be diagonalized. $A$ can still be put in Jordan normal form, so there is a change-of-basis matrix $P$ such that $A = PJP^{-1}$. Let's say $J = \begin{bmatrix}2 & 1\\ 0 & 2\end{bmatrix}$ for the sake of the example.

Then, setting $x = P^{-1}y$, we get $\dot{x} = P^{-1}\dot{y} = P^{-1}A(Px) = Jx$. So you have the reformulated system:

$\dot{x} = \begin{bmatrix}\dot{x}_1\\\dot{x}_2\end{bmatrix} = \begin{bmatrix}2 & 1\\ 0 & 2\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}$.

Now, if you solve this system by hand, starting with the second row you get:

$x_2(t) = x_2(0) e^{2t}$.

So $\dot{x_1} = 2x_1 + x_2(0) e^{2t} \implies x_1(t) = x_1(0) e^{2t} + x_2(0) te^{2t}$.

So... what you end up getting is: $\begin{bmatrix}x_1\\x_2\end{bmatrix} = \begin{bmatrix}e^{2t} & te^{2t} \\ 0 & e^{2t}\end{bmatrix}\begin{bmatrix}x_1(0)\\x_2(0)\end{bmatrix}$, where the matrix in the middle is exactly $e^{Jt}$.
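One can confirm the hand computation against scipy's `expm` (a check I added, not part of the original answer):

```python
# Check that e^{Jt} for J = [[2,1],[0,2]] equals [[e^{2t}, t e^{2t}], [0, e^{2t}]].
import numpy as np
from scipy.linalg import expm

J = np.array([[2.0, 1.0],
              [0.0, 2.0]])
t = 0.3

expected = np.array([[np.exp(2*t), t * np.exp(2*t)],
                     [0.0,         np.exp(2*t)]])
print(np.allclose(expm(t * J), expected))  # matches the hand computation
```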

Basically, the Jordan normal form is simply another way to encode your system, which you can always solve by starting from the bottom row and working your way up the diagonal.

Now, when you switch back to your original basis, everything works out fine because, if $A = PJP^{-1}$, then $e^{At} = Pe^{Jt}P^{-1}$ (you can verify this by writing out the series for $e^{At}$ and factoring $P$ out on the left and $P^{-1}$ on the right of each term).
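And a quick numerical check of $e^{At} = Pe^{Jt}P^{-1}$, building a nondiagonalizable $A$ from an arbitrarily chosen invertible $P$ (my own example):

```python
# Check e^{At} = P e^{Jt} P^{-1} by constructing A = P J P^{-1} directly.
import numpy as np
from scipy.linalg import expm

P = np.array([[1.0, 0.0],
              [1.0, 1.0]])          # any invertible change-of-basis matrix
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])
A = P @ J @ np.linalg.inv(P)        # nondiagonalizable by construction
t = 0.4

print(np.allclose(expm(t * A), P @ expm(t * J) @ np.linalg.inv(P)))
```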