How to prove $e^{A \oplus B} = e^A \otimes e^B$ where $A$ and $B$ are matrices? (Kronecker operations)


How to prove that $e^{A \oplus B} = e^A \otimes e^B$? Here $A$ and $B$ are $n\times n$ and $m \times m$ matrices, $\otimes$ is the Kronecker product and $\oplus$ is the Kronecker sum: $$ A \oplus B = A\otimes I_m + I_n\otimes B, $$ where $I_m$ and $I_n$ are the identity matrices of size $m\times m$ and $n\times n$, respectively.

EDIT: Actually, the page http://mathworld.wolfram.com/KroneckerSum.html states that this property is true.

http://digitalcommons.unf.edu/cgi/viewcontent.cgi?article=1025&context=etd

5 Answers

BEST ANSWER

What is to be proved is the following: $$ e^{A \otimes I_b + I_a \otimes B} = e^A \otimes e^B, $$ where $I_a, A \in M_n$ and $I_b, B \in M_m$.

This is true because $$ A \otimes I_b~~~~\text{and}~~~~ I_a \otimes B$$ commute, which can be shown using the so-called mixed-product property of the Kronecker product, $$ (A \otimes B)\cdot (C \otimes D) = (A\cdot C) \otimes (B\cdot D), $$ where $\cdot$ denotes the ordinary matrix product.
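
Concretely, applying the mixed-product property in both orders gives $$ (A \otimes I_b)\cdot(I_a \otimes B) = (A\cdot I_a) \otimes (I_b\cdot B) = A \otimes B = (I_a\cdot A) \otimes (B\cdot I_b) = (I_a \otimes B)\cdot(A \otimes I_b), $$ so the two summands of the Kronecker sum commute, and $e^{X+Y} = e^X e^Y$ applies with $X = A \otimes I_b$ and $Y = I_a \otimes B$.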

One can also show that for any matrix function $f$ defined by a convergent power series, $$f(A\otimes I_b) = f(A)\otimes I_b~~~~\text{and}~~~ f(I_b \otimes A) = I_b \otimes f(A)~.$$ Together with the commutativity noted above, this proves the result: $$ e^{A \oplus B} = e^{A\otimes I_b}\, e^{I_a \otimes B} = (e^A \otimes I_b)\cdot(I_a \otimes e^B) = e^A \otimes e^B~.$$
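
As a quick sanity check (not part of the proof), here is a minimal numerical verification, assuming NumPy and SciPy are available; the sizes $n = 3$, $m = 2$ and the random matrices are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))

# Kronecker sum: A (+) B = A (x) I_m + I_n (x) B
kron_sum = np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)

lhs = expm(kron_sum)             # e^{A (+) B}
rhs = np.kron(expm(A), expm(B))  # e^A (x) e^B

print(np.allclose(lhs, rhs))     # prints True (up to floating-point error)
```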

ANSWER

A way to proceed: if $A$ and $B$ commute and are each diagonalizable, then they are simultaneously diagonalizable (otherwise one must fall back on the Jordan decomposition). For diagonal matrices the formula $e^{A+B} = e^A e^B$ is easy, because it reduces to the property $e^{a+b} = e^a e^b$ of the exponential of scalars.
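
In symbols: if $P$ diagonalizes both matrices, say $A = P D_A P^{-1}$ and $B = P D_B P^{-1}$, then $$ e^{A+B} = P\, e^{D_A + D_B}\, P^{-1} = P\, e^{D_A} e^{D_B}\, P^{-1} = e^A e^B, $$ where the middle equality holds because exponentials of diagonal matrices multiply entrywise.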

ANSWER

If $A$ and $B$ are commuting $n\times n$ matrices, then by Taylor expansion we have:

$$e^A=\sum_{k=0}^{\infty}\frac{A^k}{k!}$$

Therefore:

$$e^Ae^B=\sum_{k_1=0}^{\infty}\frac{A^{k_1}}{k_1!}\sum_{k_2=0}^{\infty}\frac{B^{k_2}}{k_2!}$$
$$\Rightarrow e^Ae^B=\sum_{k_1=0}^{\infty}\sum_{k_2=0}^{\infty}\frac{A^{k_1}}{k_1!}\frac{B^{k_2}}{k_2!}$$
$$\Rightarrow e^Ae^B=\sum_{k_1=0}^{\infty}\sum_{k_2=0}^{\infty}\frac{(k_1+k_2)!}{(k_1+k_2)!}\frac{A^{k_1}}{k_1!}\frac{B^{k_2}}{k_2!}$$

$$\Rightarrow e^Ae^B=\sum_{k_1=0}^{\infty}\sum_{k_2=0}^{\infty}\frac{1}{(k_1+k_2)!}\binom{k_1+k_2}{k_2}A^{k_1}B^{k_2}$$
Set $k=k_1+k_2$ and group together all terms with the same value of $k$: $$\Rightarrow e^Ae^B=\sum_{k=0}^{\infty}\frac{1}{k!}\sum_{k_2=0}^{k}\binom{k}{k_2}A^{k-k_2}B^{k_2}=\sum_{k=0}^{\infty}\frac{1}{k!}(A+B)^{k}=e^{A+B},$$ where the last equality is the binomial theorem, valid here because $A$ and $B$ commute.
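
As a numerical illustration of the commuting case (a minimal sketch assuming NumPy and SciPy; taking $B$ to be a polynomial in $A$ guarantees $AB = BA$):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = 0.5 * rng.standard_normal((4, 4))
B = A @ A - 2.0 * A + np.eye(4)  # a polynomial in A, so AB = BA

print(np.allclose(expm(A) @ expm(B), expm(A + B)))  # prints True
```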

ANSWER

First and foremost, the identity $e^{A+B} = e^A e^B$ (the pre-edit version of the question) is not true in general. It is only true if $A$ and $B$ commute, which is a very restrictive condition for matrices.

To handle the commutative case, one can first consider the formal power series case. In the ring $\Bbb Q[[X,Y]]$ of formal power series with rational coefficients in commuting indeterminates $X,Y$, one defines $\exp(X)$, $\exp(Y)$, and $\exp(X+Y)$ by the usual power series, and the identity $\exp(X)\exp(Y)=\exp(X+Y)$ is easily checked by comparing coefficients of an arbitrary monomial in $X,Y$: both series are equal to $\sum_{k,l\geq0}\binom{k+l}k\frac{X^kY^l}{(k+l)!}$.
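
Explicitly, the coefficient of $X^kY^l$ in $\exp(X)\exp(Y)$ is $\frac{1}{k!}\cdot\frac{1}{l!}$, while in $\exp(X+Y)=\sum_{n\geq0}\frac{(X+Y)^n}{n!}$ it comes from the $n=k+l$ term and equals $\frac{1}{(k+l)!}\binom{k+l}{k}$; the two agree because $\binom{k+l}{k}=\frac{(k+l)!}{k!\,l!}$.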

Now if one restricts to formal power series with more than exponentially decreasing coefficients, substitution of a concrete value (for instance a matrix) for an indeterminate will give an absolutely convergent power series, whose limit assigns a well defined value to the substitution. If $M$ is your ring of matrices (which is also a topological $K$-vector space for $K=\Bbb R$ or $K=\Bbb C$), and $A,B\in M$ commute, then the substitutions $X:=A,Y:=B$ define, for the appropriate subring $R\subset\Bbb Q[[X,Y]]$, a continuous ring homomorphism $f:R\to M$, whose image lies in the commutative subring $K[A,B]$ of $M$ generated by $A,B$. This homomorphism then satisfies $f(\exp(S))=\exp(f(S))$ (by the definition of matrix exponentiation), so that applying $f$ to $\exp(X)\exp(Y)=\exp(X+Y)$ gives $\exp(A)\exp(B)=\exp(A+B)$.

ANSWER

Mumble! Gripe! Once again I seem to have answered the pre-edited version of the question! Ah well, at least I can take consolation in the fact that I do not appear to be alone!

I won't attempt to prove the pre-edit assertion $e^{A+B} = e^A e^B$, because it is false in general. I will however give a simple counterexample:

Let

$N_1 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \tag{1}$

and

$N_2 = \begin{bmatrix} 0 & 0 \\ -1 & 0 \end{bmatrix}; \tag{2}$

then we have

$N_1^2 = N_2^2 = 0, \tag{3}$

$N_1 N_2 = -\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \tag{4}$

and

$N_2 N_1 = -\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}; \tag{5}$

note that

$N_1 N_2 \ne N_2 N_1. \tag{6}$

From (3) it follows that

$e^{N_1} = I + N_1 \tag{7}$

and

$e^{N_2} = I + N_2, \tag{8}$

so that

$e^{N_1} e^{N_2} = (I + N_1)(I + N_2) = I + N_1 + N_2 + N_1 N_2 = \begin{bmatrix} 0 & 1 \\ -1 & 1 \end{bmatrix}, \tag{9}$

as may be seen by a simple calculation using (1), (2), and (4). We also have the matrix $J$:

$J = N_1 + N_2 = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}; \tag{10}$

we see that

$J^2 = -I. \tag{11}$

Examining $e^J$, we see that

$e^{(N_1 + N_2)} = e^J = \sum_{n=0}^\infty \dfrac{J^n}{n!} = I + J + \dfrac{1}{2}J^2 + \cdots + \dfrac{1}{n!}J^n + \cdots, \tag{12}$

and by virtue of (11) we see that, term-by-term, the power series for $e^J$ corresponds precisely to that for $e^i$, $i^2 = -1$ the ordinary complex number square root of $-1$: splitting (12) into even and odd powers and using $J^{2k} = (-1)^k I$ and $J^{2k+1} = (-1)^k J$, the classic formula $e^{i\theta} = \cos \theta + i \sin \theta$ applies to (12), so that, when $\theta = 1$, we obtain

$e^J = I \cos (1 \; \text{rad}) + J \sin (1 \; \text{rad}) = \begin{bmatrix} \cos (1 \; \text{rad}) & \sin (1 \; \text{rad}) \\ -\sin (1 \; \text{rad}) & \cos (1 \; \text{rad}) \end{bmatrix} \tag{13}$

wherein $1 \; \text{rad} = 1 \; \text{radian}$. We see from these computations that

$e^{(N_1 + N_2)} = e^J \ne e^{N_1}e^{N_2}. \tag{14}$
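
A quick numerical confirmation of (14), assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.linalg import expm

N1 = np.array([[0.0, 1.0], [0.0, 0.0]])
N2 = np.array([[0.0, 0.0], [-1.0, 0.0]])

lhs = expm(N1 + N2)        # the rotation matrix in (13)
rhs = expm(N1) @ expm(N2)  # the product (I + N1)(I + N2) in (9)

print(np.allclose(lhs, rhs))  # prints False: N1 and N2 do not commute
```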

In the event that $AB = BA$, however, the assertion $e^Ae^B = e^{A+B}$ does hold, as may be seen by the following simple argument: let $X$ be the unique matrix solution to

$\dot X = (A + B)X, X(0) = I; \tag{15}$

it is easy to see that

$X(t) = e^{(A + B)t}; \tag{16}$

now setting

$Y(t) = e^{At}e^{Bt} \tag{17}$

we see that

$\dot Y = Ae^{At}e^{Bt} + e^{At}Be^{Bt} = Ae^{At}e^{Bt} + Be^{At}e^{Bt} = (A + B)e^{At}e^{Bt} = (A + B)Y(t), \tag{18}$

since $AB = BA$ allows us to write $e^{At}B = Be^{At}$, swapping $B$ with powers $A^k$ of $A$ on a term-by-term basis. Since $X(t)$ and $Y(t)$ satisfy the same ordinary differential equation with the same initial condition, we have $X(t) = Y(t)$ for all $t$; taking $t = 1$ now establishes the assertion that

$e^Ae^B = e^{A+B}. \tag{19}$

Hope this helps. Cheerio,

and as always,

Fiat Lux!!!