Any way to simplify the matrix integral $\int_{0}^{\infty}e^{A z}e^{B z}dz$ if $A$ and $B$ do not commute but are diagonalizable?


Define $A$ and $B$ to be square matrices whose eigenvalues are all $< 0$, with no repeated eigenvalues, so both are completely diagonalizable. But assume that $A$ and $B$ do not commute.

Is there any way to simplify the expression: $\int_{0}^{\infty}e^{A z}e^{B z}dz$?

Extension: From the answers, it looks like the Sylvester equation gives the solution here. I suspect that adding a matrix multiplied in between the exponentials just adds a term to the Sylvester equation.

My problem actually looks more like: $$ \int_{0}^{\infty}e^{A z}\cdot D\cdot e^{B z}\cdot E\cdot e^{C z}dz $$ where $A,B,C$ all have negative eigenvalues and none commute, and $D,E$ are as well behaved as you wish. Is there a generalization of the Sylvester equation for this setup? (No worries about solving the resulting matrix equation with vec's etc.; I will figure out how to solve it numerically if required.)


There are 2 answers below.

BEST ANSWER

\begin{eqnarray} \int_{0}^{\infty} e^{Az} e^{Bz}\; dz &=& \int_0^{\infty} \left(\sum_{n=0}^{\infty} \frac{1}{n!} A^n z^n \right) e^{Bz}\; dz \\ &=& \sum_{n=0}^{\infty} \frac{1}{n!} A^n \int_0^{\infty} z^n e^{Bz}\; dz \\ &=& \sum_{n=0}^{\infty} \frac{1}{n!} A^n \cdot n! (-B)^{-(n+1)} \\ &=& -\left(\sum_{n=0}^{\infty} (-1)^n A^n B^{-n} \right) B^{-1}. \end{eqnarray}

Define $M\equiv \sum_{n=0}^{\infty} (-1)^nA^n B^{-n}$. Observe \begin{eqnarray} A M B^{-1} &=& -\sum_{n=1}^{\infty} (-1)^nA^n B^{-n} \\ &=& I - M. \end{eqnarray} Thus, we have the relationship $AMB^{-1} = I - M$ or $$ (*)\ \ \ AM + MB = B. $$ If $A$ and $B$ are $m\times m$ matrices, then so is $M$, and the relation $(*)$ provides a system of $m^2$ linear equations in $m^2$ unknowns. (Each entry $M_{ij}$ of $M$ is an unknown and $(*)$ provides one equation for each of the $m^2$ entries of the matrices on its left and right hand sides.) Hence, $(*)$ can in general be solved for $M$ using standard linear algebra techniques and the value of the integral is $-MB^{-1}$.
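As a numerical sanity check (a sketch, not part of the original answer), the relation $(*)$ can be solved with SciPy's `solve_sylvester`, which solves $AX + XB = Q$; the value $-MB^{-1}$ is then compared against brute-force quadrature of $e^{Az}e^{Bz}$. The matrix construction and tolerances below are illustrative choices, not taken from the answer.

```python
import numpy as np
from scipy.linalg import expm, solve_sylvester

rng = np.random.default_rng(0)
n = 3

def stable(rng, n):
    # Symmetric negative-definite matrix: all eigenvalues <= -1,
    # diagonalizable, and two such matrices generically do not commute.
    G = rng.standard_normal((n, n))
    return -(G @ G.T + np.eye(n))

A, B = stable(rng, n), stable(rng, n)
assert np.linalg.norm(A @ B - B @ A) > 1e-6  # they do not commute

# Solve (*): AM + MB = B for M; the integral is then -M B^{-1}.
M = solve_sylvester(A, B, B)
I_closed = -M @ np.linalg.inv(B)

# Brute-force trapezoidal quadrature of e^{Az} e^{Bz} on [0, 40];
# the integrand decays at least like e^{-2z}, so the tail is negligible.
zs = np.linspace(0.0, 40.0, 4001)
dz = zs[1] - zs[0]
vals = np.array([expm(A * z) @ expm(B * z) for z in zs])
I_quad = (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1])) * dz

print(np.max(np.abs(I_closed - I_quad)))  # small quadrature-level error
```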

Note that if $A$ and $B$ commute, then $M = (A+B)^{-1} B$ and the integral is $-(A+B)^{-1}$, as expected.

In general, there is no way (I'm pretty sure) to solve $(*)$ using standard algebraic operations on $A$ and $B$; that is, you can't write an expression for $M$ in terms of $A$ and $B$ using only matrix multiplication, addition, inversion, etc. But that doesn't mean solving $(*)$ is difficult: it's just a linear system of $m^2$ equations in $m^2$ unknowns. It's still very easy, merely less elegant than a closed-form formula would be.

Extension

For the extension, you can expand everything out in eigenvectors, but unlike the Sylvester equation approach in @user1551's answer, it requires computation of eigenvalues and eigenvectors for each of $A$, $B$, and $C$.

Write \begin{eqnarray} A &=& \sum_i -\lambda_{Ai} u_{Ai}u_{Ai}^T \\ B &=& \sum_i -\lambda_{Bi} u_{Bi}u_{Bi}^T \\ C &=& \sum_i -\lambda_{Ci} u_{Ci}u_{Ci}^T \end{eqnarray} for eigenvalues $-\lambda_{\cdot i}$ and eigenvectors $u_{\cdot i}$. Then \begin{eqnarray} \int_0^{\infty} e^{Az} D e^{Bz} E e^{Cz} \; dz &=& \int_0^{\infty} \left(\sum_i e^{-\lambda_{Ai} z} u_{Ai} u_{Ai}^T \right) D \left(\sum_j e^{-\lambda_{Bj} z} u_{Bj} u_{Bj}^T \right) E \left(\sum_k e^{-\lambda_{Ck} z} u_{Ck} u_{Ck}^T \right)\; dz \\ &=& \sum_{i,j,k} \frac{1}{\lambda_{Ai}+\lambda_{Bj}+\lambda_{Ck}} u_{Ai}u_{Ai}^T D u_{Bj}u_{Bj}^T E u_{Ck} u_{Ck}^T. \end{eqnarray}
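The triple sum above can be checked numerically. The sketch below (not from the original answer) restricts to symmetric $A,B,C$ so that the spectral form $\sum_i -\lambda_i u_iu_i^T$ with orthonormal eigenvectors applies directly; the random matrices and quadrature settings are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 3

def stable_sym(rng, n):
    # Symmetric negative definite: real eigenvalues <= -1, orthonormal eigenvectors.
    G = rng.standard_normal((n, n))
    return -(G @ G.T + np.eye(n))

A, B, C = stable_sym(rng, n), stable_sym(rng, n), stable_sym(rng, n)
D, E = rng.standard_normal((n, n)), rng.standard_normal((n, n))

# Eigendecompositions: eigh returns eigenvalues eA (all < 0) and columns of
# eigenvectors, so A = sum_i eA[i] * outer(uA[:,i], uA[:,i]), etc.
eA, uA = np.linalg.eigh(A)
eB, uB = np.linalg.eigh(B)
eC, uC = np.linalg.eigh(C)
lamA, lamB, lamC = -eA, -eB, -eC  # positive decay rates

# The triple sum from the answer.
I_sum = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            Pi = np.outer(uA[:, i], uA[:, i])
            Pj = np.outer(uB[:, j], uB[:, j])
            Pk = np.outer(uC[:, k], uC[:, k])
            I_sum += (Pi @ D @ Pj @ E @ Pk) / (lamA[i] + lamB[j] + lamC[k])

# Brute-force trapezoidal quadrature of e^{Az} D e^{Bz} E e^{Cz} on [0, 40].
zs = np.linspace(0.0, 40.0, 4001)
dz = zs[1] - zs[0]
vals = np.array([expm(A * z) @ D @ expm(B * z) @ E @ expm(C * z) for z in zs])
I_quad = (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1])) * dz

print(np.max(np.abs(I_sum - I_quad)))  # small quadrature-level error
```

For general (non-symmetric) diagonalizable matrices, the outer products $u_iu_i^T$ would be replaced by the spectral projectors built from right and left eigenvectors.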

This approach may not possess very nice properties for numerical computation. That is, it may not be very accurate, and it may be computationally quite expensive, depending on the size of your matrices.

ANSWER

Here is the usual approach. Let $X = \int_0^\infty e^{Az}e^{Bz} dz$. As all eigenvalues of $A$ and $B$ are negative, $e^{Az}e^{Bz}\to0$ as $z\to\infty$. Hence $$ AX+XB =\int_0^\infty (Ae^{Az}e^{Bz} + e^{Az}e^{Bz}B) dz =\int_0^\infty d\left(e^{Az}e^{Bz}\right) =-I. $$ That is, $X$ satisfies the Sylvester equation $AX+XB=-I$. The eigenvalues of the linear map $f:X\mapsto AX+XB$ are sums of the form $\lambda_i+\mu_j$, where $\lambda_1,\ldots,\lambda_n$ are the eigenvalues of $A$ and $\mu_1,\ldots,\mu_n$ are the eigenvalues of $B$. It follows that all eigenvalues of $f$ are negative and the Sylvester equation is uniquely solvable. If you vectorise $X$, the equation can be rewritten as $(I\otimes A+B^T\otimes I)\operatorname{vec}(X)=-\operatorname{vec}(I)$. Therefore an explicit solution of $X$ is given by $\operatorname{vec}(X)=-(I\otimes A+B^T\otimes I)^{-1}\operatorname{vec}(I)$.
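The Sylvester equation and its Kronecker/vec form can be checked against each other numerically. This sketch (not part of the original answer) uses SciPy's `solve_sylvester` for $AX+XB=-I$ and builds $(I\otimes A+B^T\otimes I)$ explicitly; note that $\operatorname{vec}$ stacks columns, which corresponds to Fortran order (`order='F'`) in NumPy.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(2)
n = 3

def stable(rng, n):
    # Symmetric negative definite: all eigenvalues <= -1.
    G = rng.standard_normal((n, n))
    return -(G @ G.T + np.eye(n))

A, B = stable(rng, n), stable(rng, n)

# Sylvester solver for AX + XB = -I.
X = solve_sylvester(A, B, -np.eye(n))

# Explicit Kronecker form: (I (x) A + B^T (x) I) vec(X) = -vec(I),
# with vec = column stacking (NumPy order='F').
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n))
vecX = np.linalg.solve(K, -np.eye(n).flatten(order='F'))
X_kron = vecX.reshape((n, n), order='F')

print(np.max(np.abs(X - X_kron)))  # the two solutions agree
```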

One can see that the above result is equivalent to Will Nelson's, but the derivation is different from his.