Inverse of the anti-commutator


Let $B \in M_{n \times n}(\mathbb{R})$ be a fixed symmetric positive-definite matrix. Consider the anti-commutator map

$$f:M_{n \times n}(\mathbb{R})\to M_{n \times n}(\mathbb{R})$$

via

$$f(C)=BC+CB$$

We can prove that $f$ is an injective linear operator and thus invertible. (Choose a basis in which $B$ is diagonal, and everything is clear.) My question:

Is there a good expression for $f^{-1}$?

I am looking for an expression for $f^{-1}(D)$ as some polynomial or power series in terms of $B, B^{-1}, D$.

There are 2 solutions below.


We can use the more general result for solving the Lyapunov equation. We wish to solve the equation $$ BC + CB^H = D $$ for $C$ (here $B^H = B$, since $B$ is real symmetric). Changing the sign of both sides, we note that $-B$ is stable (in the sense that its eigenvalues have negative real part), since $B$ is positive definite. The standard Lyapunov solution then gives $$ (-B)C + C(-B)^H = -D \implies\\ C = -\int_0^\infty e^{-B\tau}(-D)e^{-B\tau}\,d\tau = \int_0^\infty e^{-B\tau}De^{-B\tau}\,d\tau $$

So, we may write our inverse as $$ f^{-1}(D) = \int_0^\infty e^{-B\tau}De^{-B\tau}\,d\tau $$ Indeed, one can check directly that $$ B\,f^{-1}(D) + f^{-1}(D)\,B = -\int_0^\infty \frac{d}{d\tau}\left(e^{-B\tau}De^{-B\tau}\right)d\tau = D, $$ since $e^{-B\tau} \to 0$ as $\tau \to \infty$ for positive-definite $B$. Of course, this uses a matrix-valued integral, and $e^{-B \tau}$ is implicitly a power series in $B$.
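As a quick numerical sanity check of the integral formula, here is a sketch in Python (assuming SciPy is available; `solve_sylvester(B, B, D)` solves $BC + CB = D$ and serves as the reference solution, while the integral is evaluated by adaptive quadrature):

```python
import numpy as np
from scipy.integrate import quad_vec
from scipy.linalg import expm, solve_sylvester

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)          # symmetric positive definite
D = rng.standard_normal((n, n))

# Reference: BC + CB = D is a Sylvester equation with coefficients (B, B).
C_ref = solve_sylvester(B, B, D)

# Integral formula: f^{-1}(D) = integral of e^{-B tau} D e^{-B tau} over [0, inf)
C_int, _ = quad_vec(lambda tau: expm(-B * tau) @ D @ expm(-B * tau), 0, np.inf)

assert np.allclose(B @ C_ref + C_ref @ B, D)
assert np.allclose(C_int, C_ref, atol=1e-7)
```

The quadrature converges because every entry of the integrand decays like $e^{-2\lambda_{\min}(B)\tau}$.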


If we write this map as a matrix with respect to the canonical basis of $M_{n \times n}(\mathbb{R})$ (equivalently, acting on the column-stacked vectorization $\operatorname{vec}(C)$), we find that $$ [f]_{\mathcal B} = I \otimes B + B^T \otimes I $$ and so we're ultimately looking for the inverse of the sum of the two commuting matrices $I \otimes B$ and $B^T \otimes I$. We can get this inverse as a power series, though we cannot guarantee its convergence.
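This Kronecker representation is easy to verify numerically. A minimal sketch (using the column-stacking convention $\operatorname{vec}(XCY) = (Y^T \otimes X)\operatorname{vec}(C)$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)           # symmetric positive definite
C = rng.standard_normal((n, n))

# Matrix of f in vectorized form: vec(BC + CB) = M vec(C)
M = np.kron(np.eye(n), B) + np.kron(B.T, np.eye(n))

vec = lambda X: X.flatten(order="F")  # column-stacking vec
assert np.allclose(M @ vec(C), vec(B @ C + C @ B))

# M is invertible, so f is too: f^{-1}(D) = unvec(M^{-1} vec(D))
D = rng.standard_normal((n, n))
Cs = np.linalg.solve(M, vec(D)).reshape((n, n), order="F")
assert np.allclose(B @ Cs + Cs @ B, D)
```

Solving the $n^2 \times n^2$ linear system directly is $O(n^6)$, which is why closed forms and series expansions are of interest.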

Let $M = I \otimes B + B^T \otimes I$. We have $$ (I \otimes B^{-1})M = I + B^T \otimes B^{-1} $$ Formally expanding as a geometric series, $$ (I + B^T \otimes B^{-1})^{-1} = \sum_{k=0}^\infty (-1)^k (B^T \otimes B^{-1})^k $$ Moreover, $$ [(I \otimes B^{-1})M]^{-1} = M^{-1}(I \otimes B) = (I \otimes B)M^{-1} $$ All together, $$ M^{-1} = (I \otimes B^{-1})\sum_{k=0}^\infty (-1)^k (B^T \otimes B^{-1})^k $$ Translated back into operators (using $\operatorname{vec}(XCY) = (Y^T \otimes X)\operatorname{vec}(C)$), we have $$ f^{-1}(D) = \sum_{k=0}^\infty (-1)^k B^{-(k+1)} D B^k $$ However, we have no reason to expect this sum to converge. Notably, in the $2$-norm, we find that $$ \|B^T \otimes B^{-1}\| = \|B^T\| \cdot \|B^{-1}\| = \|B\| \cdot \|B^{-1}\| \geq 1 $$


A strategy to get around non-convergence may be as follows: for $t > 0$, the function $$ f_t(C) = t\,BC + CB $$ has inverse $$ f_t^{-1}(D) = t^{-1} \sum_{k=0}^\infty (-1)^k t^{-k} B^{-(k+1)} D B^k $$ and we can guarantee that this series converges whenever $t > \|B\| \cdot \|B^{-1}\|$. Perhaps we could then take a limit as $t \to 1^+$ of the resulting expression.
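A sketch of this regularized series, under the expansion $f_t^{-1}(D) = t^{-1}\sum_{k\ge 0}(-1)^k t^{-k} B^{-(k+1)} D B^k$ with $t$ chosen larger than the condition number of $B$ so that the terms shrink geometrically:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)           # symmetric positive definite
D = rng.standard_normal((n, n))
Binv = np.linalg.inv(B)

t = 2 * np.linalg.cond(B)             # t > ||B|| ||B^-1|| guarantees convergence
C = np.zeros_like(D)
term = Binv @ D                       # k = 0 term: B^{-1} D
for k in range(200):
    C += (-1) ** k * t ** (-k) * term
    term = Binv @ term @ B            # next term: B^{-(k+2)} D B^{k+1}
C /= t

# C should invert f_t, i.e. solve t*BC + CB = D
assert np.allclose(t * B @ C + C @ B, D)
```

With this choice of $t$, successive terms decay at least like $2^{-k}$, so 200 terms is far more than enough for machine precision.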


If $B$ has the eigenvalues $t_1,\dots,t_n$, then the eigenvalues of $f=f_B$ are $(t_i+t_j)_{1\le i,j\le n}$. So the characteristic polynomial of $f_B$ is $P(X)=\prod_{i,j}(X-t_i-t_j)$. This polynomial is known: $P(X)$ is (maybe up to sign) equal to the resultant of $\chi(Y)$ and $\chi(X-Y)$, where $\chi(X)=\prod_i(X-t_i)$ is the characteristic polynomial of $B$ and the resultant is computed viewing both as polynomials in $Y$. Write $P(X)=\sum_{i=0}^{n^2}m_iX^i$ ($m_{n^2}=1$; the other coefficients can be computed explicitly using a formula for the resultant, or from the roots: e.g. $m_0=(-1)^{n^2}\prod_{i,j}(t_i+t_j)$, which is nonzero since each $t_i+t_j>0$). Then $P(f_B)=0$, so $m_0\,C = -\sum_{i=1}^{n^2} m_i f_B^i(C)$, and factoring out one application of $f_B$ gives $$f_B^{-1}(C)=-m_0^{-1}\sum_{i=0}^{n^2-1}m_{i+1}f_B^{i}(C)$$ for all $C$.
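This gives $f_B^{-1}$ as an explicit polynomial in $f_B$ of degree $n^2-1$. A sketch that builds the coefficients $m_i$ from the eigenvalue sums (rather than via the resultant) and applies the polynomial:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)           # symmetric positive definite
D = rng.standard_normal((n, n))

# Eigenvalues of f_B are all pairwise sums t_i + t_j.
t = np.linalg.eigvalsh(B)
roots = (t[:, None] + t[None, :]).ravel()
m = np.poly(roots)[::-1]              # m[i] = coefficient of X^i; m[n**2] = 1

def f(C):
    return B @ C + C @ B

# f_B^{-1}(D) = -m_0^{-1} * sum_{i=0}^{n^2-1} m_{i+1} f_B^i(D)
C = np.zeros_like(D)
power = D.copy()                      # holds f_B^i(D), starting at i = 0
for i in range(n * n):
    C += m[i + 1] * power
    power = f(power)
C *= -1.0 / m[0]

assert np.allclose(f(C), D)
```

Note that for large $n$ the coefficients $m_i$ grow quickly, so this formula is cleaner algebraically than it is numerically stable.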