Integral formula for the differential of matrix exponential


This is a problem from Jacques Faraut's Analysis on Lie Groups.

Given $A,X\in M(n,\mathbb{R})$, put $F(t)=\exp(t(A+X))$. In the first part of the problem we showed that $F$ is a solution to the integral equation $$F(t)-\int_0^t\exp((t-s)A)\,X\,F(s)\,ds = \exp(tA).$$ This was straightforward. The next few parts are giving me trouble, though. We define a sequence of maps $W_k(t)$ by $W_0(t) = \exp(tA)$ and $$W_k(t) = \int_0^t \exp((t-s)A)\,X\,W_{k-1}(s)\,ds.\tag{1}$$ The problem is to show that the series $\sum_{k=0}^\infty W_k(t)$ converges for each $t$, and that it converges to $F(t)$. We are then asked to prove the formula $$(D\exp)_AX = W_1(1)=\int_0^1\exp((1-s)A)\,X\,\exp(sA)\,ds.\tag{2}$$
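(For concreteness, formula (2) is easy to sanity-check numerically before trying to prove it. A minimal sketch, assuming NumPy; the `expm` helper here is a naive truncated-Taylor stand-in for the matrix exponential, the integral is approximated by the midpoint rule, and the differential by a central difference.)

```python
import numpy as np

def expm(M, terms=60):
    """Matrix exponential via a truncated Taylor series (adequate for modest norms)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 3))

# Right-hand side of (2): midpoint-rule approximation of
# the integral of exp((1-s)A) X exp(sA) over s in [0, 1].
n = 2000
mids = (np.arange(n) + 0.5) / n
integral = sum(expm((1 - s) * A) @ X @ expm(s * A) for s in mids) / n

# Left-hand side of (2): central-difference approximation of
# the directional derivative of exp at A in the direction X.
h = 1e-6
fd = (expm(A + h * X) - expm(A - h * X)) / (2 * h)

print(np.max(np.abs(integral - fd)))  # agreement up to discretization error
```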

I believe I have shown that $\sum_{k=0}^\infty W_k(t)$ converges by taking norms in (1) and applying induction, which let me dominate the series by a power series convergent for each fixed $t$. I haven't the faintest clue how to show the series converges to $F(t)$, though. I tried expanding $\exp$ as a formal sum and showing that $\sum_{k=0}^\infty W_k(t) = \sum_{j=0}^\infty t^j(A+X)^j/j!$, but the fact that $W_k$ is defined via recursive integration makes this difficult. I'm also unsure how to prove formula (2), since it was unclear to me how to apply the previous parts.
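(For what it's worth, the claimed convergence $\sum_k W_k(t)\to F(t)$ can also be checked numerically. A rough sketch, assuming NumPy; the recursion (1) is discretized with the trapezoid rule, using $\exp((t-s)A)=\exp(tA)\exp(-sA)$ so that each $W_k$ comes from a single cumulative integral; `expm` is again a naive Taylor-series helper.)

```python
import numpy as np

def expm(M, terms=60):
    """Matrix exponential via a truncated Taylor series (adequate for modest norms)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(1)
A = 0.5 * rng.standard_normal((2, 2))
X = 0.5 * rng.standard_normal((2, 2))

n = 1000
ts = np.linspace(0.0, 1.0, n + 1)
dt = ts[1] - ts[0]
E = np.array([expm(ti * A) for ti in ts])      # exp(t_i A) on the grid
Einv = np.array([expm(-ti * A) for ti in ts])  # exp(-t_i A) on the grid

Wk = E.copy()        # W_0(t_i) = exp(t_i A)
total = Wk.copy()    # running partial sum of the series
for k in range(1, 15):
    # W_k(t) = exp(tA) * integral_0^t exp(-sA) X W_{k-1}(s) ds
    g = Einv @ X @ Wk                          # integrand on the grid
    steps = 0.5 * (g[1:] + g[:-1]) * dt        # trapezoid increments
    G = np.concatenate([np.zeros((1, 2, 2)), np.cumsum(steps, axis=0)])
    Wk = E @ G
    total = total + Wk

print(np.max(np.abs(total[-1] - expm(A + X))))  # small: partial sums approximate F(1)
```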

I've looked into other ways of proving the result (2), and most of them seem to rely on manipulating a two-parameter family $\Gamma(s,t) = \exp(-sX(t))\,\partial_t\!\exp(sX(t))$. My instincts tell me that these two methods are roughly equivalent, but I can't tease out their relationship. Any hints/help/advice is greatly appreciated.



Proof Sketch:

You know $F(t)$ is Lipschitz continuous on $[0,\,1]$.

Let $C^{0,\,1}[0,\,1]$ be the class of all Lipschitz continuous square $n\times n$ matrix valued functions on $[0,\,1]$.

By iterated integration, check that some sufficiently high iterate of the operator:

$$\mathscr{L}: C^{0,\,1}[0,\,1] \to C^{0,\,1}[0,\,1];\quad (\mathscr{L}f)(t) = e^{tA} + \int_0^t\,\exp((t-s)\,A)\,X\,f(s)\,\mathrm{d} s$$

is contractive. See, for example, proofs of the Picard–Lindelöf theorem for the kinds of tricks one uses to do this.

Next, show that the partial sums of your series are exactly the iterates of $\mathscr{L}$ starting from the initial guess $f \equiv 0$ (the first iterate is $\mathscr{L}0 = W_0$, the next is $W_0 + W_1$, and so on).

By the contraction mapping principle, you then know that the series converges to the unique solution of the integral equation; since you already know that $F$ is a solution, the series must converge to $F$.
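To make the contraction step concrete, here is the standard estimate one proves by induction on $k$ (a sketch; $M$ denotes $\sup_{0\le s\le t\le 1}\|\exp((t-s)A)\,X\|$, which is finite by continuity on a compact set):

$$\sup_{0\le t\le 1}\big\|(\mathscr{L}^k f)(t) - (\mathscr{L}^k g)(t)\big\| \;\le\; \frac{M^k}{k!}\,\sup_{0\le t\le 1}\|f(t)-g(t)\|,$$

so $\mathscr{L}^k$ is a contraction in the sup norm as soon as $M^k/k! < 1$, which holds for all sufficiently large $k$.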


There are, as you already know, many different ways of messing with this kind of idea. See, for example, Rossmann, "Lie Groups: An Introduction Through Linear Groups", $\S$1.2, Theorem 1.5, or the generalization to non-matrix Lie groups I have on my website under the title "Friedrich Schur's Deft Trick".


PS: There is almost certainly a more direct proof for your problem that doesn't need these full-strength theorems, and I'm a little ashamed that I can't quite see it off the top of my head. In general, though, two techniques almost always clobber this kind of problem: (i) show that two putatively equal functions solve the same Cauchy problem and appeal to Picard–Lindelöf for uniqueness, or (ii) express the object of interest as the limit of iterates of an operator that is contractive (or contractive after finitely many applications) and appeal to the contraction mapping theorem. The latter procedure is the same idea as, but more general than, the proof of the local Picard–Lindelöf theorem.


First, a way of proving that the sum of the series $$\sum_{k = 0}^\infty W_k(t)$$ is equal to $F(t)$. Define the operator $Q$, acting on matrix-valued functions, by $$(QH)(t) = \int_0^t \exp((t - s)A)\,X\,H(s)\,ds.$$ The integral equation can then be written $$(I - Q)F = W_0.$$ Observe that $$W_k = QW_{k - 1} = Q^k W_0.$$ Define $$G = \sum_{k = 0}^\infty W_k.$$ Applying $I - Q$ to the partial sums and using the identity $$(I - Q)(I + Q + \ldots + Q^k) = I - Q^{k + 1},$$ one gets $(I - Q)\sum_{j=0}^k W_j = W_0 - Q^{k+1}W_0$. Since $Q^{k+1}W_0 \to 0$ uniformly on compact intervals (by the same norm estimate that proved convergence of the series), letting $k \to \infty$ yields $$(I - Q)G = W_0.$$ Hence $G$ is a solution of the integral equation. By uniqueness of the solution, $G = F$.
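For the limit step, the bound that makes $Q^{k}W_0 \to 0$ explicit is the one behind the convergence of the series: with $M = \sup_{0\le s\le t}\|\exp((t-s)A)\,X\|$, induction on $k$ gives

$$\|(Q^k W_0)(t)\| \;\le\; \frac{(Mt)^k}{k!}\,\sup_{0\le s\le t}\|W_0(s)\|,$$

which tends to $0$ uniformly on $[0,t]$ as $k \to \infty$.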

Second, each $W_k(t)$ is a polynomial function of $X$, homogeneous of degree $k$. Hence $W_0(1) = \exp(A)$ is the constant term and $W_1(1)$ is the linear term in $X$ of $F(1) = \exp(A + X)$; that linear term is precisely the differential, which gives formula (2).
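Spelled out, grouping the series for $F(1)$ by degree in $X$:

$$\exp(A+X) = F(1) = \sum_{k=0}^\infty W_k(1) = \exp(A) + \int_0^1 \exp((1-s)A)\,X\,\exp(sA)\,ds + O(\|X\|^2),$$

since $\|W_k(1)\| \le \frac{M^k}{k!}\,e^{\|A\|}$ with $M \le e^{\|A\|}\|X\|$, so the tail $\sum_{k\ge 2} W_k(1)$ is $O(\|X\|^2)$. Comparing with $\exp(A+X) = \exp(A) + (D\exp)_A X + o(\|X\|)$ gives (2).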