If all the real parts of the eigenvalues are negative, then indeed $e^{tA} \to 0$?


Is there a short proof to show that if all the real parts of the eigenvalues are negative, then indeed $e^{tA} \to 0$?

Also, I am wondering: does $\lim_{t \rightarrow \infty} ||e^{At}||=0 \iff$ the real parts of all eigenvalues are less than $0$, or can we allow a real part equal to $0$ if the algebraic and geometric multiplicities are the same?


BEST ANSWER

This is the shortest I could find:

Let $ A = P J P^{-1} $ be the Jordan decomposition of $A$. Then $ e^{tA} = e^{t P J P^{-1}} = P e^{tJ} P^{-1} $.

Since $J$ is a block diagonal matrix, $e^{tJ}$ is also a block diagonal matrix, and each of its blocks is the exponential of $t$ times the corresponding block of $J$.

Hence $ \lim\limits_{t \to \infty} e^{tA} = 0 \iff $ for each Jordan block $B$, $ \lim\limits_{t \to \infty} e^{tB} = 0 $.

A Jordan block $B$ has the form $$ B(m,n)= \begin{cases} \lambda & n = m \\ 1 & n = m + 1 \\ 0 & \text{otherwise} \end{cases} $$

where $ \lambda = a + ib $ is an eigenvalue of $A$.

So $$ e^{tB}(m,n)= \begin{cases} \frac{t^{n-m}}{(n-m)!} e^{\lambda t} & m \le n \\ 0 & \text{otherwise} \end{cases} $$
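As a sanity check, this entrywise formula is easy to verify numerically (a sketch: the eigenvalue $\lambda$, block size, and time $t$ below are arbitrary choices, and SciPy's `expm` supplies the reference value):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

lam, b, t = -0.5 + 2.0j, 4, 1.7                    # eigenvalue, block size, time
B = lam * np.eye(b) + np.diag(np.ones(b - 1), 1)   # Jordan block: lambda*I + N

# Entrywise formula: (e^{tB})(m,n) = t^(n-m)/(n-m)! * e^(lam t) for m <= n, else 0
F = np.zeros((b, b), dtype=complex)
for m in range(b):
    for n in range(m, b):
        F[m, n] = t ** (n - m) / factorial(n - m) * np.exp(lam * t)

print(np.allclose(expm(t * B), F))                 # True
```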

Therefore $ \lim\limits_{t \to \infty} e^{tB} = 0 \iff \lim\limits_{t \to \infty} e^{\lambda t} = 0 $: the diagonal entries of $e^{tB}$ are exactly $e^{\lambda t}$, and once $e^{\lambda t} \to 0$ (so that $a = \Re(\lambda) < 0$), the exponential decay dominates the polynomial factors $\frac{t^{n-m}}{(n-m)!}$ in the remaining entries.

Since $e^{i b t}$ has modulus $1$ for every $t$ (it is continuous and periodic, hence bounded), $$ \lim\limits_{t \to \infty} e^{\lambda t} = \lim\limits_{t \to \infty} e^{a t} e^{i b t} = 0 \iff a < 0. $$

Thus we can conclude that $$ \lim\limits_{t \to\infty} ||e^{At}||=0 \iff \eta(A) < 0, $$ where $ \eta(A) = \max_i \Re(\lambda_i) $ is the spectral abscissa of $A$.
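For what it is worth, this equivalence is easy to probe numerically (a sketch: the matrix below is a random choice, shifted so that its spectral abscissa is exactly $-0.1$; `expm` and the spectral norm come from SciPy/NumPy):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A -= (np.linalg.eigvals(A).real.max() + 0.1) * np.eye(5)  # shift so eta(A) = -0.1

print("eta(A) =", np.linalg.eigvals(A).real.max())        # -0.1
for t in (1, 10, 50, 100):
    print(t, np.linalg.norm(expm(t * A), 2))              # decays to 0
```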

ANOTHER ANSWER

If $Av = \lambda v$ then $e^{tA}v = e^{t\lambda}v$.
Let $\lambda = a + ib$. Then $|e^{t\lambda}| = e^{t a}$, so $\Vert e^{tA}v \Vert = e^{ta}\Vert v \Vert$; in particular, if any eigenvalue has $a \ge 0$, then $\Vert e^{tA} \Vert \not\to 0$.
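A minimal numerical illustration of this observation (a sketch; the $2 \times 2$ matrix below, with eigenvalues $-1$ and $-2$, is an arbitrary example):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # eigenvalues -1 and -2
lam, V = np.linalg.eig(A)
t, v = 2.0, V[:, 0]

# e^{tA} v = e^{t*lam} v, so ||e^{tA} v|| = e^{t*Re(lam)} ||v||
print(np.allclose(expm(t * A) @ v, np.exp(t * lam[0]) * v))   # True
```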

ANOTHER ANSWER

I'm not too sure this answer makes the cut in terms of shortness; I'll have to credit mucciolo on that score. Having said this, I offer the following, in which I have tried to gather most of the details together in one single writ, which is I think reasonably thorough and, hopefully, not too long.

The question indeed expresses a widely used fact which, despite its ubiquity of application, is not often worked out in detail.

Look first at a single Jordan block $B$ of size $b$. It is a $b \times b$ matrix of the form

$B = \lambda I + N, \tag 1$

where

$N^b = 0 \tag 2$

but

$N^{b - 1} \ne 0. \tag 3$

It is easy to compute

$e^{Bt} = e^{(\lambda I + N)t}; \tag 4$

since $\lambda I$ commutes with $N$, we may manipulate $e^{\lambda I t}$ and $e^{Nt}$ exactly as if $\lambda I$ and $N$ were ordinary numbers (i.e. $1 \times 1$ matrices) as far as multiplication is concerned, and thus

$e^{(\lambda I + N)t} = e^{\lambda I t}e^{Nt} = (e^{\lambda t} I)e^{Nt} = e^{\lambda t}e^{Nt}, \tag 5$

where

$e^{\lambda I t} = \displaystyle \sum_0^\infty \dfrac{(\lambda I t)^n}{n!} = \sum_0^\infty \dfrac{(\lambda t)^n}{n!}I^n = \sum_0^\infty \dfrac{(\lambda t)^n}{n!}I = e^{\lambda t}I; \tag 6$

by (2) and (3) we have

$e^{Nt} = \displaystyle \sum_0^{b - 1} \dfrac{N^n}{n!}t^n, \tag 7$

the entries of which are polynomials in $t$ of degree at most $b - 1$. If we now write

$\lambda = \Re(\lambda) + i \Im(\lambda), \tag 8$

we find

$e^{\lambda t} = e^{\Re(\lambda) t} e^{i\Im(\lambda) t}, \tag 9$

whence

$e^{Bt} = e^{(\lambda I + N)t} = e^{\lambda t}e^{Nt} = e^{\Re(\lambda)t} e^{i\Im(\lambda)t}e^{Nt}, \tag{10}$

so

$\Vert e^{Bt} \Vert = \Vert e^{\Re(\lambda)t} e^{i\Im(\lambda)t}e^{Nt} \Vert = \vert e^{\Re(\lambda)t} \vert \, \vert e^{i\Im(\lambda)t} \vert \, \Vert e^{Nt} \Vert = e^{\Re(\lambda)t}\Vert e^{Nt} \Vert, \tag{11}$

since $\vert e^{i\Im(\lambda)t} \vert = 1$,

and by (7),

$\Vert e^{Nt} \Vert = \Vert \displaystyle \sum_0^{b - 1} \dfrac{(Nt)^n}{n!} \Vert \le \sum_0^{b - 1} \dfrac{\Vert N \Vert^n t^n}{n!}, \tag{12}$

so (11) and (12) yield

$\Vert e^{Bt} \Vert \le e^{\Re(\lambda) t} (\displaystyle \sum_0^{b - 1} \dfrac{\Vert N \Vert^n t^n}{n!}); \tag{13}$

with $\Re(\lambda) < 0$, the decaying exponential on the right of (13) dominates the polynomial growth of the sum, and thus

$\Vert e^{Bt} \Vert \le e^{\Re(\lambda) t} (\displaystyle \sum_0^{b - 1} \dfrac{\Vert N \Vert^n t^n}{n!}) \to 0 \; \text{as} \; t \to \infty. \tag{14}$
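Here is a numerical check of the bound (13) (a sketch: $\lambda$, the block size $b$, and the sample times are arbitrary choices, and the spectral norm stands in for $\Vert \cdot \Vert$):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

lam, b = -0.3 + 1.0j, 4
N = np.diag(np.ones(b - 1), 1)               # nilpotent part: N^b = 0
B = lam * np.eye(b) + N
nrmN = np.linalg.norm(N, 2)                  # ||N|| = 1 for a Jordan block

for t in (1.0, 10.0, 50.0):
    lhs = np.linalg.norm(expm(t * B), 2)
    rhs = np.exp(lam.real * t) * sum(nrmN ** n * t ** n / factorial(n) for n in range(b))
    print(t, lhs <= rhs, lhs)                # the bound holds, and lhs -> 0
```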

(14) holds for any Jordan block $B(\lambda)$ corresponding to an eigenvalue $\lambda$ with $\Re(\lambda) < 0$:

$\Vert e^{B(\lambda) t} \Vert \to 0 \; \text{as} \; t \to \infty; \tag{15}$

if now $A$ is in Jordan canonical form and has $k$ eigenvalues $\lambda_i$, $1 \le i \le k$ (not necessarily distinct), each of which is associated with a Jordan block $B(\lambda_i)$, so that $A$ may be written in block diagonal form as

$A = \text{diag}(B(\lambda_1), B(\lambda_2), \ldots, B(\lambda_l), \ldots, B(\lambda_{k - 1}), B(\lambda_k))$ $= \begin{bmatrix} B(\lambda_1) & 0 & 0 & \ldots & \ldots & \ldots & 0 \\ 0 & B(\lambda_2) & 0 & \ldots & \ldots & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & \ldots & 0 & B(\lambda_l) & 0 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \ldots & \ldots & 0 & B(\lambda_{k - 1}) & 0 \\ 0 & 0 & \ldots & \ldots & 0 & 0 & B(\lambda_k) \end{bmatrix}; \tag{16}$

it follows that

$e^{At} = \text{diag}(e^{B(\lambda_1) t}, e^{B(\lambda_2) t}, \ldots, e^{B(\lambda_l) t}, \ldots, e^{B(\lambda_{k - 1}) t}, e^{B(\lambda_k) t}); \tag{17}$

if we assume $\Re(\lambda_i) < 0$ for each $\lambda_i$ occurring in (17), then it follows from (14)-(15) that $\Vert e^{B(\lambda_i) t} \Vert \to 0$ as $t \to \infty$ for each $i$; thus

$e^{At} \to 0, \tag{18}$

which in fact holds if and only if

$\Vert e^{At} \Vert \to 0. \tag{19}$
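The block-diagonal structure in (17) can likewise be confirmed numerically (a sketch using SciPy's `block_diag` and `expm`; the two blocks below are arbitrary examples):

```python
import numpy as np
from scipy.linalg import expm, block_diag

B1 = np.array([[-1.0, 1.0], [0.0, -1.0]])    # 2x2 Jordan block, lambda = -1
B2 = np.array([[-2.0]])                      # 1x1 block, lambda = -2
A, t = block_diag(B1, B2), 2.5

# e^{At} = diag(e^{B1 t}, e^{B2 t})
print(np.allclose(expm(t * A), block_diag(expm(t * B1), expm(t * B2))))  # True
```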

We may in fact derive bounds for $\Vert e^{At} \Vert$ in terms of the $\Vert e^{B(\lambda_i) t} \Vert$ as follows:

Suppose the block diagonal matrix

$M = \text{diag}(L_1, L_2, \ldots, L_m) \tag{20}$

acts in the usual manner as an operator on $\Bbb C^{\text{size}(M)}$, which is equipped with the usual hermitian inner product $\langle \cdot, \cdot \rangle_M$ and norm $\Vert \cdot \Vert_M = \langle \cdot, \cdot \rangle^{1/2}_M$; we note that $M$ may be written as a sum

$M = \text{diag}(L_1, 0, 0, \ldots, 0) + \text{diag}(0, L_2, 0, \ldots, 0) + \ldots + \text{diag}(0, 0, \ldots, L_m)$ $= \displaystyle \sum_1^m \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0); \tag{21}$

then for any vector $x \in \Bbb C^{\text{size}(M)}$,

$Mx = \displaystyle \sum_1^m \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0)x, \tag{22}$

whence

$\Vert Mx \Vert_M = \displaystyle \Vert \sum_1^m \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0)x \Vert_M \le \sum_1^m \Vert \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0)x \Vert_M; \tag{23}$

also,

$\Vert \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0)x \Vert_M \le \Vert \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0) \Vert_M \Vert x \Vert_M. \tag{24}$

Now

$\Vert \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0) \Vert_M = \sup_{\Vert x \Vert = 1} \Vert \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0)x \Vert_M, \tag{25}$

and we note that the range of $\text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0)$ lies in a $\text{size}(L_i)$-dimensional subspace $M_i \cong \Bbb C^{\text{size}(L_i)}$ of $\Bbb C^{\text{size}(M)}$; just as we denote the inner product and norm on $M$ by $\langle \cdot, \cdot \rangle_M$ and $\Vert \cdot \Vert_M$, respectively, so we denote those induced on $M_i$ by $\langle \cdot, \cdot \rangle_i$ and $\Vert \cdot \Vert_i$; we have

$\Vert \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0)x \Vert_M^2$ $= \langle \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0)x, \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0)x \rangle_M$ $= \langle L_i x_i, L_i x_i \rangle_i =\Vert L_i x_i \Vert_i^2, \tag{26}$

where $x_i$ is the component of $x$ in the subspace $M_i$ of $M = \Bbb C^{\text{size}(M)}$ corresponding to the block $L_i$. From (26)

$\Vert \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0)x \Vert_M = \Vert L_i x_i \Vert_i, \tag{27}$

thus

$\sup_{\Vert x \Vert = 1} \Vert \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0)x \Vert_M = \sup_{\Vert x \Vert = 1} \Vert L_i x_i \Vert_i, \tag{28}$

and

$\sup_{\Vert x \Vert = 1} \Vert L_i x_i \Vert_i = \sup_{\Vert x_i \Vert = 1} \Vert L_i x_i \Vert_i, \tag{29}$

which holds by virtue of the fact that $L_i:M_i \to M_i$, so that components $x_j \in M_j$ of $x$ in the subspaces $M_j$ normal to $M_i$ make zero contribution to $\sup_{\Vert x \Vert = 1} \Vert L_i x_i \Vert_i$; thus we may without loss of generality restrict $x$ to $M_i$, whence (29) holds. Therefore, by (28) and (29),

$\sup_{\Vert x \Vert = 1} \Vert \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0)x \Vert_M = \sup_{\Vert x_i \Vert = 1} \Vert L_i x_i \Vert_i; \tag{30}$

from (25),

$\Vert \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0) \Vert_M = \sup_{\Vert x \Vert = 1} \Vert \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0)x \Vert_M$ $= \sup_{\Vert x_i \Vert = 1} \Vert L_i x_i \Vert_i = \Vert L_i \Vert_i, \tag{31}$

and therefore (23), with the aid of (24) and (31), yields

$\Vert Mx \Vert_M = \displaystyle \Vert \sum_1^m \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0)x \Vert_M \le \sum_1^m \Vert \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0)x \Vert_M$ $\le \displaystyle \sum_1^m \Vert \text{diag}(0, 0, \ldots, L_i, 0, \ldots, 0) \Vert_M \Vert x \Vert_M = \sum_1^m \Vert L_i \Vert_i \Vert x \Vert_M = (\sum_1^m \Vert L_i \Vert_i)\Vert x \Vert_M, \tag{32}$

which shows that

$\Vert M \Vert_M \le \displaystyle \sum_1^m \Vert L_i \Vert_i. \tag{33}$

If we acknowledge that the apparatus of the subscripts ${}_M$ and ${}_i$ etc. was introduced merely to track the various inner products and norms invoked in the preceding discussion, and to distinguish them from one another, and that such explicit notation is already implicit in the matrices $M$, $L_i$ and their norms, we may drop the subscripts and resort to the greatly simplified expression

$\Vert M \Vert \le \displaystyle \sum_1^m \Vert L_i \Vert, \tag{34}$

where it is understood that the norms are taken with respect to the subspaces on which $M$ and the $L_i$ act.
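A numerical check of (34), in the spectral norm, is below (a sketch with arbitrary random blocks; note that for a block-diagonal $M$ one in fact has the sharper identity $\Vert M \Vert = \max_i \Vert L_i \Vert$, but the sum bound above is all the argument needs):

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
L = [rng.standard_normal((k, k)) for k in (2, 3, 4)]   # arbitrary blocks
M = block_diag(*L)
norms = [np.linalg.norm(Li, 2) for Li in L]

print(np.linalg.norm(M, 2) <= sum(norms))              # True: the bound (34)
print(np.isclose(np.linalg.norm(M, 2), max(norms)))    # True: the sharper identity
```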

We may apply (33) to (17) and conclude that

$\Vert e^{At} \Vert \le \displaystyle \sum_1^k \Vert e^{B(\lambda_i) t} \Vert, \tag{35}$

and since each $\Vert e^{B(\lambda_i) t} \Vert \to 0$ as $t \to \infty$, it follows that

$\Vert e^{At} \Vert \to 0 \; \text{as} \; t \to \infty \tag{36}$

as well. Hence, also, $e^{A t} \to 0$ as $t \to \infty$.

It has been shown above that the negativity of the real parts of all eigenvalues is sufficient for $\Vert e^{At} \Vert \to 0$ as $t \to \infty$. Going the other way, if there is some $\lambda_i$ with $\Re(\lambda_i) \ge 0$, from (11) we see that

$\Vert e^{B(\lambda_i)t} \Vert = e^{\Re(\lambda_i)t}\Vert e^{N_it} \Vert, \tag{37}$

where $B(\lambda_i) = \lambda_i I + N_i$ as in (1)-(15). Since $N_i^{b_i} = 0$ for some $b_i \in \Bbb N$, i.e., since $N_i$ is nilpotent, the entries of $e^{N_i t}$ are polynomials in $t$; if $e^{N_i t}$ is not constant, which fails only when $N_i = 0$ (so that $e^{N_i t} = I$ and hence $\Vert e^{N_i t} \Vert = 1$), we have $\Vert e^{N_i t} \Vert \to \infty$ as $t \to \infty$. In any event, whether $e^{N_i t}$ is constant or not, when $\Re(\lambda_i) \ge 0$ the factor $e^{\Re(\lambda_i) t}$ is bounded below by $1$, forcing $\Vert e^{B(\lambda_i) t} \Vert \ge \Vert e^{N_i t} \Vert \not \to 0$ as well. Then $e^{A t}$ does not tend to $0$ on the subspace corresponding to $B(\lambda_i)$, and this is sufficient to validate the claim $\Vert e^{A t} \Vert \not \to 0$ in general. Note that this also settles the second part of the question: even when the algebraic and geometric multiplicities agree, so that $N_i = 0$, an eigenvalue with $\Re(\lambda_i) = 0$ contributes a block of constant norm $1$, and $\Vert e^{At} \Vert$ remains bounded away from $0$.
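This borderline behavior, exactly what the second part of the question asks about, is easy to see numerically (a sketch: the matrix below has eigenvalues $\pm i$, real part $0$, with equal algebraic and geometric multiplicities, and $e^{At}$ is a rotation whose norm is identically $1$):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0], [1.0, 0.0]])      # eigenvalues +/- i, diagonalizable
for t in (1.0, 10.0, 100.0):
    print(t, np.linalg.norm(expm(t * A), 2)) # always 1.0: bounded, but not -> 0
```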

We have now established that

$[\forall i \; \Re(\lambda_i ) < 0] \Longleftrightarrow [\Vert e^{A t} \Vert \to 0] \Longleftrightarrow [e^{A t} \to 0] \tag{38}$

for matrices $A$ in Jordan canonical form; but since any matrix $M$ may be written

$M = EAE^{-1} \tag{39}$

for some such $A$ with the same eigenvalues as $M$, and then

$e^{M t} = Ee^{A t} E^{-1}, \tag{40}$

it follows that

$[\forall i \; \Re(\lambda_i ) < 0] \Longleftrightarrow [\Vert e^{M t} \Vert \to 0] \Longleftrightarrow [e^{M t} \to 0]. \tag{41}$
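Finally, a numerical sketch of the similarity step (39)-(40) (the diagonal $A$ and the random, generically invertible $E$ below are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = np.diag([-1.0, -2.0, -3.0])              # eta(A) < 0
E = rng.standard_normal((3, 3))              # random, hence generically invertible
M, t = E @ A @ np.linalg.inv(E), 4.0

# e^{Mt} = E e^{At} E^{-1}, so decay of e^{At} transfers to e^{Mt}
print(np.allclose(expm(t * M), E @ expm(t * A) @ np.linalg.inv(E)))   # True
```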