Explicit computation of norm of matrix exponential for a concrete matrix


I am interested in the explicit computation of the following norm

$$ |x| := \sup_{t \geq 0} \left\Vert \mathrm {e}^{-At}x \right\Vert_2,$$

where $x = (x_1, x_2)^{\top} \in \mathbb{R}^2$, $A \in \mathbb{R}^{2 \times 2}$, and $\Vert \cdot \Vert_2$ denotes the standard Euclidean norm. Concretely, I consider the matrix

$$ {A} := \begin{pmatrix} \frac{19}{20} & -\frac{3}{10} \\ \frac{3}{10} & -\frac{1}{20} \end{pmatrix}. $$

This matrix is non-symmetric but positive stable; its eigenvalues are $\lambda_1 = 1/20$ and $\lambda_2 = 17/20$. For symmetric and anti-symmetric matrices this norm would coincide with the Euclidean norm, but not for this matrix. The expressions quickly become very intricate: I explicitly calculated the matrix exponential and then tried to maximize over $t$ by differentiation, which unfortunately leads to an intractable expression. So is there an intelligent way of calculating this norm that leads to a pleasant expression?

I would be very grateful for any help!

EDIT: In this case $$ \mathrm{e}^{-At} = \frac{1}{8} \left(\begin{array}{rr} -\mathrm{e}^{-\frac{1}{20} t} + 9\mathrm{e}^{-\frac{17}{20} t} & 3\mathrm{e}^{-\frac{1}{20} t} - 3\mathrm{e}^{-\frac{17}{20} t} \\ -3\mathrm{e}^{-\frac{1}{20} t} + 3\mathrm{e}^{-\frac{17}{20} t} & 9\mathrm{e}^{-\frac{1}{20} t} - \mathrm{e}^{-\frac{17}{20} t} \\ \end{array} \right), $$ and the resulting norm $$ \left\Vert \mathrm{e}^{-At} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \right\Vert_2 = \frac{1}{8} \sqrt{\left((-\mathrm{e}^{-\frac{1}{20} t} + 9\mathrm{e}^{-\frac{17}{20} t})x_1 + (3\mathrm{e}^{-\frac{1}{20} t} - 3\mathrm{e}^{-\frac{17}{20} t})x_2 \right)^2 + \left((-3\mathrm{e}^{-\frac{1}{20} t} + 3\mathrm{e}^{-\frac{17}{20} t})x_1 + (9\mathrm{e}^{-\frac{1}{20} t} - \mathrm{e}^{-\frac{17}{20} t})x_2 \right)^2} $$ becomes rather cumbersome to work with.
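For reference, a quick numerical sanity check of this expression (a sketch using SciPy's `expm`; the test vector $x = (1, 2)^{\top}$ and the truncation of the time grid at $T = 50$ are arbitrary choices, the latter justified by the decay of both exponentials):

```python
import numpy as np
from scipy.linalg import expm

# The concrete matrix from the question.
A = np.array([[19/20, -3/10],
              [3/10, -1/20]])
x = np.array([1.0, 2.0])  # arbitrary test vector

# Both eigenvalues of A are positive, so ||exp(-A t) x||_2 decays as
# t -> infinity; the supremum over t >= 0 is therefore attained on a
# finite interval, which we sample on a grid.
ts = np.linspace(0.0, 50.0, 2001)
norms = np.array([np.linalg.norm(expm(-A * t) @ x) for t in ts])

t_star = ts[np.argmax(norms)]
print(t_star, norms.max())  # for this x, the maximum sits at t = 0
```

For this particular $x$ the grid maximum sits at $t = 0$ with value $\Vert x \Vert_2 = \sqrt{5}$, consistent with the closed-form analysis in the answers below.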

There are 2 answers below.

ANSWER 1:

For typing convenience, define the following variables and their derivatives with respect to $t$:
$$\eqalign{ E &= \exp(-At) \quad&\implies\quad \dot E = -AE \\ w &= Ex \quad&\implies\quad \dot w = -AEx = -Aw \\ }$$
Let $\sigma(t) = \Vert Ex \Vert_2$ denote the quantity to be maximized. Square it and calculate its derivative:
$$\eqalign{ \sigma^2 &= \Vert Ex\Vert_2^2 = w^Tw \\ 2\sigma\,\dot\sigma &= 2w^T\dot w = -2w^TAw \\ \dot\sigma &= -\left(\frac{w^TAw}{\sigma}\right) \\ \frac{\sigma}{\dot\sigma} &= -\left(\frac{\sigma^2}{w^TAw}\right) = -\left(\frac{w^Tw}{w^TAw}\right) \\ }$$
Use this derivative with Newton's method to numerically calculate the maximum of $\sigma$. The iterations are tricky, since the denominator goes to zero near the solution, so the step length $\lambda_k$ needs to be really tiny to prevent divergence:
$$\eqalign{ w_{k} &= \exp(-At_k)\,x \\ t_{k+1} &= t_k - \lambda_k\left(\frac{w_k^Tw_k}{w_k^TAw_k}\right) \\ }$$
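As a sanity check on the derivative formula above (a sketch; the sample vector $x = (1,2)^{\top}$ and the test times are arbitrary choices), one can compare $\dot\sigma = -(w^TAw)/\sigma$ against a central finite difference:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[19/20, -3/10],
              [3/10, -1/20]])
x = np.array([1.0, 2.0])  # arbitrary sample vector

def sigma(t):
    """sigma(t) = ||exp(-A t) x||_2, the quantity being maximized."""
    return np.linalg.norm(expm(-A * t) @ x)

# Compare the analytic derivative -(w^T A w)/sigma with a central
# finite difference at a few sample times.
results = []
for t in (0.1, 1.0, 5.0):
    w = expm(-A * t) @ x
    analytic = -(w @ A @ w) / sigma(t)
    h = 1e-6
    numeric = (sigma(t + h) - sigma(t - h)) / (2 * h)
    results.append((analytic, numeric))

for analytic, numeric in results:
    print(analytic, numeric)  # the two columns agree closely
```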

ANSWER 2:

Let $A$ be a 2-by-2 matrix with distinct positive eigenvalues $\lambda_1, \lambda_2$ (so $A$ is diagonalizable). Given $x \in \mathbb{R}^2$, there exist eigenvectors $v_1$ and $v_2$, associated with $\lambda_1$ and $\lambda_2$ respectively, such that $$ x = v_1 + v_2. $$

We want to maximize \begin{align*} f(t) &= |e^{-At}x|^2\\ &= (e^{-At}x)\cdot(e^{-At}x) \end{align*} for $t \ge 0$. Differentiating, we get \begin{align*} f'(t) &= -2(e^{-At}x)\cdot A(e^{-At}x)\\ &= -2(e^{-\lambda_1t}v_1 + e^{-\lambda_2t}v_2)\cdot (\lambda_1e^{-\lambda_1t}v_1 + \lambda_2e^{-\lambda_2t}v_2)\\ &= -2(\lambda_1e^{-2\lambda_1t}|v_1|^2 + \lambda_2e^{-2\lambda_2t}|v_2|^2 + (\lambda_1+\lambda_2)e^{-(\lambda_1+\lambda_2)t}v_1\cdot v_2). \end{align*}

Therefore, if we set $$ \alpha = e^{(\lambda_2-\lambda_1)t}, $$ then \begin{align*} -\frac{1}{2}e^{(\lambda_1+\lambda_2)t}f'(t) &= \lambda_1|v_1|^2e^{(\lambda_2-\lambda_1)t} + \lambda_2|v_2|^2e^{(\lambda_1-\lambda_2)t} + (\lambda_1+\lambda_2)v_1\cdot v_2\\ &= \lambda_1|v_1|^2 \alpha + \lambda_2|v_2|^2\alpha^{-1} + (\lambda_1+\lambda_2)v_1\cdot v_2\\ &= \frac{\lambda_1|v_1|^2\alpha^2 + (\lambda_1+\lambda_2)(v_1\cdot v_2)\alpha + \lambda_2|v_2|^2}{\alpha}. \end{align*}

This is zero only if \begin{align*} \alpha &= \frac{-(\lambda_1+\lambda_2)(v_1\cdot v_2) \pm \sqrt{(\lambda_1+\lambda_2)^2(v_1\cdot v_2)^2 - 4\lambda_1|v_1|^2\lambda_2|v_2|^2}}{2\lambda_1|v_1|^2}\\ &= \frac{(\lambda_1+\lambda_2)|v_1||v_2|}{2\lambda_1|v_1|^2} \left(-\frac{v_1\cdot v_2}{|v_1||v_2|} \pm \sqrt{\frac{(v_1\cdot v_2)^2}{|v_1|^2|v_2|^2} - \frac{4\lambda_1\lambda_2}{(\lambda_1+\lambda_2)^2}}\right). \end{align*}

Let $$ \beta = \frac{2\sqrt{\lambda_1\lambda_2}}{\lambda_1+\lambda_2} $$ and let $\theta$ be the angle from $v_1$ to $v_2$, i.e., $$ \cos\theta = \frac{v_1\cdot v_2}{|v_1||v_2|}. $$ Observe that $\cos\theta$ is independent of $x$ up to sign: the eigenvector directions are determined by $A$ alone, but the sign of $\cos\theta$ flips when the coefficients of $x$ in the eigenbasis have opposite signs. The formula for $\alpha$ becomes $$ \alpha = \frac{(\lambda_1+\lambda_2)|v_1||v_2|}{2\lambda_1|v_1|^2}(-\cos\theta \pm \sqrt{(\cos\theta)^2-\beta^2}). $$ From this, we can draw the following conclusions, which (up to the sign of $\cos\theta$) depend only on the matrix $A$:

  1. If $$ \cos\theta \ge \beta, $$ then the solutions are negative and therefore $f$ has no critical points for $t \ge 0$. The maximum of $f$ is at $t = 0$.
  2. If $$ -\beta < \cos\theta < \beta, $$ then there are no solutions and therefore $f$ has no critical points. The maximum of $f$ is at $t = 0$.
  3. If $$ \cos\theta < -\beta, $$ there are two solutions and both are positive.
  4. If $$ \cos\theta = -\beta, $$ then there is one positive solution.

In the last two cases, for each solution $\alpha$ you can check whether $$ t = \frac{\log \alpha}{\lambda_2-\lambda_1} > 0 $$ and whether $f(t) > f(0)$. It is only in this last inequality that the value of $x$ matters.

In your specific example, with $v_1, v_2$ chosen so that both coefficients of $x$ in the eigenbasis are positive, $$ \cos\theta = \frac{3}{5} > \beta = \frac{\sqrt{17}}{9}, $$ and therefore the maximum of $f$ occurs at $t = 0$, so the norm in question is $\sqrt{f(0)} = \Vert x \Vert_2$. (If the coefficients of $x$ have opposite signs, then $\cos\theta = -3/5 < -\beta$, and the check from the previous paragraph applies.)
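These quantities can be verified numerically (a sketch; it uses NumPy's `eig` and takes $|\cos\theta|$, since the numerically returned eigenvectors are only determined up to sign):

```python
import numpy as np

A = np.array([[19/20, -3/10],
              [3/10, -1/20]])

lam, V = np.linalg.eig(A)  # eigenvalues 1/20 and 17/20
# beta = 2 sqrt(lam1 lam2)/(lam1 + lam2); the product and sum of the
# eigenvalues are det(A) and trace(A), so the ordering does not matter.
beta = 2 * np.sqrt(lam[0] * lam[1]) / (lam[0] + lam[1])  # = sqrt(17)/9

# Angle between the eigenvector directions (1,3) and (3,1); take the
# absolute value because eig fixes each eigenvector only up to sign.
cos_theta = abs(V[:, 0] @ V[:, 1])

print(cos_theta, beta)  # 0.6 and about 0.458, so cos(theta) > beta
```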