So… I’m trying to trace the origins of the bounded real lemma, and I have a question about inverting a matrix whose partition has square off-diagonal blocks.
Following the reference:
Boyd, S., Balakrishnan, V., & Kabamba, P. (1989). A bisection method for computing the H∞ norm of a transfer matrix and related problems. Mathematics of Control, Signals, and Systems (MCSS), 2(3), 207-219.
Given real matrices $\{A,B,C,D\}$ of sizes $n\times n$, $n\times p$, $m\times n$, and $m\times p$ respectively,
Equation (4) gives the following relation:
$$ \begin{bmatrix} A & 0 \\ 0 & -A^T\end{bmatrix} + \begin{bmatrix} B & 0 \\ 0 & -C^T \end{bmatrix} \begin{bmatrix} -D & \gamma I_v \\ \gamma I_u & -D^T \end{bmatrix} ^{-1} \begin{bmatrix} C & 0 \\ 0 & B^T \end{bmatrix} = \begin{bmatrix} A-BR^{-1}D^TC & -\gamma BR^{-1}B^T \\ \gamma C^TS^{-1}C & -A^T + C^TDR^{-1}B^T \end{bmatrix} $$
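As a sanity check, equation (4) can be verified numerically. A minimal sketch with numpy (the sizes and random seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p, gamma = 4, 3, 2, 2.0

A = rng.standard_normal((n, n))
B = rng.standard_normal((n, p))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, p))

R = D.T @ D - gamma**2 * np.eye(p)
S = D @ D.T - gamma**2 * np.eye(m)
Ri, Si = np.linalg.inv(R), np.linalg.inv(S)

# Left-hand side of equation (4): by the block sizes, I_v is m x m and I_u is p x p.
left_outer = np.block([[B, np.zeros((n, m))],
                       [np.zeros((n, p)), -C.T]])
middle = np.block([[-D, gamma * np.eye(m)],
                   [gamma * np.eye(p), -D.T]])
right_outer = np.block([[C, np.zeros((m, n))],
                        [np.zeros((p, n)), B.T]])
blkdiag = np.block([[A, np.zeros((n, n))],
                    [np.zeros((n, n)), -A.T]])
lhs = blkdiag + left_outer @ np.linalg.inv(middle) @ right_outer

# Right-hand side of equation (4)
rhs = np.block([[A - B @ Ri @ D.T @ C, -gamma * B @ Ri @ B.T],
                [gamma * C.T @ Si @ C, -A.T + C.T @ D @ Ri @ B.T]])

print(np.allclose(lhs, rhs))  # True
```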
where $R=(D^TD - \gamma^2 I)$ and $S=(DD^T - \gamma ^2 I)$
This holds only if:
$$ \begin{bmatrix} -D & \gamma I_v \\ \gamma I_u & -D^T \end{bmatrix} ^{-1} = \begin{bmatrix} -R^{-1}D^T & -\gamma R^{-1} \\ -\gamma S^{-1} & -DR^{-1} \end{bmatrix} $$
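For what it's worth, this claimed inverse is easy to confirm numerically against a direct matrix inversion; a sketch with random matrices (sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
m, p, gamma = 3, 2, 1.5
D = rng.standard_normal((m, p))

R = D.T @ D - gamma**2 * np.eye(p)
S = D @ D.T - gamma**2 * np.eye(m)
Ri, Si = np.linalg.inv(R), np.linalg.inv(S)

M = np.block([[-D, gamma * np.eye(m)],
              [gamma * np.eye(p), -D.T]])
# The claimed closed-form inverse
M_inv_claimed = np.block([[-Ri @ D.T, -gamma * Ri],
                          [-gamma * Si, -D @ Ri]])

print(np.allclose(np.linalg.inv(M), M_inv_claimed))  # True
```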
My question is: how can I obtain the inverse of a matrix with square off-diagonal blocks, like the one that appears here?
Has anyone encountered a problem like this one? Is this a trivial question? Where should I search?… Thank you.
Somehow the matrix:
$$ \begin{bmatrix} - D^T & -\gamma I_u \\ -\gamma I_v & -D \end{bmatrix} $$
gives the idea of a generalization of the transposed cofactor (adjugate) matrix… still, it feels wrong… also… the “determinant” takes two different values ($R$ and $S$) and can appear as a block-diagonal factor from either the left or the right.
A clarification:
The identity given in that reference is indeed true: $$ \begin{bmatrix} -D & \gamma I_v \\ \gamma I_u & -D^T \end{bmatrix} ^{-1} = \begin{bmatrix} -R^{-1}D^T & -\gamma R^{-1} \\ -\gamma S^{-1} & -DR^{-1} \end{bmatrix} $$
You can verify it by computing the product:
$$ \begin{bmatrix} -R^{-1}D^T & -\gamma R^{-1} \\ -\gamma S^{-1} & -DR^{-1} \end{bmatrix} \begin{bmatrix} -D & \gamma I_v \\ \gamma I_u & -D^T \end{bmatrix} = \begin{bmatrix} R^{-1}D^TD-\gamma^2R^{-1} & -\gamma R^{-1}D^T +\gamma R^{-1}D^T \\ \gamma S^{-1}D - \gamma DR^{-1} & -\gamma^2S^{-1} + DR^{-1}D^T \end{bmatrix} $$
As you can easily check, $ R^{-1}D^TD - \gamma^2 R^{-1} = R^{-1}(D^TD-\gamma^2 I) = R^{-1}R = I $.
Also $ -\gamma R^{-1}D^T + \gamma R^{-1}D^T = 0 $ .
And using the Neumann series expansion (valid for $\gamma > \sigma_{\max}(D)$, so that the series converges) we can see that:
$$ \gamma S^{-1}D - \gamma DR^{-1} = \gamma (DD^T-\gamma^2 I)^{-1}D -\gamma D(D^TD-\gamma^2 I)^{-1} $$ $$ = -\gamma (\gamma^2 I- DD^T)^{-1}D +\gamma D(\gamma^2 I - D^TD)^{-1} $$ $$ =-\gamma[\sum_{i=1}^{\infty} (DD^T)^{i-1} (\tfrac{1}{\gamma^2})^{i}]D + \gamma D[\sum_{j=1}^{\infty} (D^TD)^{j-1} (\tfrac{1}{\gamma^2})^{j}] $$ $$ =-\gamma[\sum_{i=1}^{\infty} (DD^T)^{i-1}D (\tfrac{1}{\gamma^2})^{i}] + \gamma [\sum_{j=1}^{\infty} D(D^TD)^{j-1} (\tfrac{1}{\gamma^2})^{j}]=0, $$ since $(DD^T)^{i-1}D = D(D^TD)^{i-1}$ makes the two sums identical.
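A shorter route that avoids the series (and any convergence requirement on $\gamma$) is the push-through identity $DR^{-1} = S^{-1}D$, which follows directly from $D(D^TD-\gamma^2 I) = (DD^T-\gamma^2 I)D$ whenever $R$ and $S$ are invertible. A quick numerical sketch of this identity (random matrices, arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(2)
m, p, gamma = 4, 3, 2.5
D = rng.standard_normal((m, p))

R = D.T @ D - gamma**2 * np.eye(p)   # p x p
S = D @ D.T - gamma**2 * np.eye(m)   # m x m

# Push-through identity: D (D^T D - g^2 I)^{-1} = (D D^T - g^2 I)^{-1} D,
# a consequence of D (D^T D - g^2 I) = (D D^T - g^2 I) D.
lhs = D @ np.linalg.inv(R)
rhs = np.linalg.inv(S) @ D
print(np.allclose(lhs, rhs))  # True
```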
And we can also see that:
$$ -\gamma^2S^{-1} + DR^{-1}D^T= \gamma^2 (\gamma^2 I- DD^T)^{-1} - D(\gamma^2 I - D^TD)^{-1}D^T $$ $$ =\gamma^2 [\sum_{i=1}^{\infty} (DD^T)^{i-1} (\tfrac{1}{\gamma^2})^{i}]- D [\sum_{j=1}^{\infty} (D^TD)^{j-1} (\tfrac{1}{\gamma^2})^{j}]D^T $$ $$ = \sum_{k=0}^{\infty} (DD^T)^{k} (\tfrac{1}{\gamma^2})^{k} - \sum_{j=1}^{\infty} (DD^T)^{j} (\tfrac{1}{\gamma^2})^{j} = (DD^T)^{0}=I, $$ using $D(D^TD)^{j-1}D^T = (DD^T)^{j}$: every term with $k\ge 1$ cancels and only the $k=0$ term survives.
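The two nontrivial blocks above are exact identities whenever $R$ and $S$ are invertible (the series is only one way to see it), so they can be checked numerically without any condition on $\gamma$ beyond invertibility. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
m, p, gamma = 3, 2, 2.0
D = rng.standard_normal((m, p))

R = D.T @ D - gamma**2 * np.eye(p)
S = D @ D.T - gamma**2 * np.eye(m)
Ri, Si = np.linalg.inv(R), np.linalg.inv(S)

# The (2,1) block should vanish and the (2,2) block should be the identity.
blk21 = gamma * Si @ D - gamma * D @ Ri
blk22 = -gamma**2 * Si + D @ Ri @ D.T

print(np.allclose(blk21, 0), np.allclose(blk22, np.eye(m)))  # True True
```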
A remark:
My question remains: how do they obtain the inverse in the first place?
Is it some generalized Cramer's rule? Why does the "determinant" have to multiply from both sides?… Anyway… thank you!
I'm going to just do $\gamma = 1$. I think it's not hard to extend it to arbitrary $\gamma$, following these steps.
First, note that we can "rotate" this matrix, e.g.
$$ \left[ \begin{matrix} -D & I \\ I & -D^T \end{matrix} \right] = \left[ \begin{matrix} I & -D \\ -D^T & I \end{matrix} \right] \left[ \begin{matrix} 0 & I \\ I & 0 \end{matrix} \right] $$ and if $M$ is the inverse of the left hand side, then
$$M = \left(\left[ \begin{matrix} I & -D \\ -D^T & I \end{matrix} \right] \left[ \begin{matrix} 0 & I \\ I & 0 \end{matrix} \right]\right)^{-1} = \left[ \begin{matrix} 0 & I \\ I & 0 \end{matrix} \right] \left[ \begin{matrix} I & -D \\ -D^T & I \end{matrix} \right]^{-1}, $$ since the swap matrix is its own inverse (note it ends up as the *left* factor).
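A quick numerical check of this factorization (square $D$ for simplicity, scaled to keep things well conditioned; since the swap matrix is its own inverse, it appears on the left of the inverse):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
D = rng.standard_normal((n, n)) * 0.2  # scaled to keep things well conditioned
I = np.eye(n)
swap = np.block([[np.zeros((n, n)), I], [I, np.zeros((n, n))]])

M0 = np.block([[-D, I], [I, -D.T]])
A_rot = np.block([[I, -D], [-D.T, I]])

# Factorization: M0 = A_rot @ swap, hence M0^{-1} = swap @ A_rot^{-1}.
print(np.allclose(M0, A_rot @ swap))                                # True
print(np.allclose(np.linalg.inv(M0), swap @ np.linalg.inv(A_rot)))  # True
```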
So I'm just going to find the inverse of the "rotated" matrix. Note that this matrix is symmetric, so I know the inverse is also symmetric. Let's say the inverse is $$ N = \left[ \begin{matrix} A & B \\ B^T & C \end{matrix} \right] $$ (reusing the letters $A,B,C$ for its blocks). Then $$ I = \left[ \begin{matrix} I & -D \\ -D^T & I \end{matrix} \right] N = \left[ \begin{matrix} A-DB^T & B-DC \\ B^T-D^TA & C-D^TB \end{matrix} \right] $$ which implies the following four relations
$$A-DB^T = I$$ $$C-D^TB = I$$ $$D^TA =B^T$$ $$B = DC$$
Using the first and third to solve for $A$ and the second and fourth to solve for $C$ gives $$ A-DD^TA = I \iff A = (I-DD^T)^{-1} $$ $$ C-D^TDC = I \iff C = (I-D^TD)^{-1}, $$ and then $B = DC = D(I-D^TD)^{-1}$. I think that should yield that relation you have.
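The construction above can be sketched numerically (square $D$ scaled down so the inverses are well conditioned; $A,B,C$ here are the blocks of $N$, not the state-space matrices from the question):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
D = rng.standard_normal((n, n)) * 0.2   # scaled so I - D D^T is well conditioned
I = np.eye(n)

# Solve the four relations: A = (I - D D^T)^{-1}, C = (I - D^T D)^{-1},
# B = D C (consistent with B^T = D^T A by the push-through identity).
A = np.linalg.inv(I - D @ D.T)
C = np.linalg.inv(I - D.T @ D)
B = D @ C
N = np.block([[A, B], [B.T, C]])

# N inverts the "rotated" matrix, and swap @ N inverts the original one.
A_rot = np.block([[I, -D], [-D.T, I]])
swap = np.block([[np.zeros((n, n)), I], [I, np.zeros((n, n))]])
M0 = np.block([[-D, I], [I, -D.T]])

print(np.allclose(A_rot @ N, np.eye(2 * n)))        # True
print(np.allclose(M0 @ (swap @ N), np.eye(2 * n)))  # True
```

The blocks of `swap @ N` are exactly $[[-R^{-1}D^T,\,-R^{-1}],[-S^{-1},\,-DR^{-1}]]$ at $\gamma=1$, i.e. the inverse claimed in the question.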