The inverse of a submatrix of an inverse matrix; how to numerically approach this?


Suppose that $A$ is a positive-definite symmetric matrix. I want to solve

$$\widetilde{(A^{-1})}_k x = b$$

where $\widetilde{(B)}_k$ is the matrix $B$ but with the $k^{th}$ row and $k^{th}$ column removed. (So if $B \in \mathbb{R}^{n \times n}$, then $\widetilde{(B)}_k \in \mathbb{R}^{(n-1) \times (n - 1)}$.)

The naive approach is to find the inverse of $A$ and remove the $k^{th}$ row and $k^{th}$ column from the result and proceed as normal, but a rule to live by in numerical work is to never take the inverse of a matrix. So what should I do instead?
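For concreteness, the naive approach might look like this (a sketch only; the random SPD test matrix and the 0-indexed $k$ are my own choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)       # SPD by construction
b = rng.standard_normal(n - 1)

B = np.linalg.inv(A)              # the explicit inverse one would like to avoid
keep = [i for i in range(n) if i != k]
x = np.linalg.solve(B[np.ix_(keep, keep)], b)
```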


Many algorithms can be formulated, and many problems analyzed, in terms of partitioned matrices. Suppose $A$ is symmetric positive definite (SPD) and let $A=LL^T$ denote its Cholesky decomposition. Partition $A$ and $L$ conformally as follows: $$ A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} L_{11} & 0 \\ L_{21} & L_{22} \end{bmatrix} \begin{bmatrix} L_{11}^T & L_{21}^T \\ 0 & L_{22}^T \end{bmatrix} = LL^T.$$ Invoking the term "conformally" saves us the tedium of explicitly listing the dimensions of all blocks. We simply have $$A_{11} = L_{11} L_{11}^T, \quad A_{21} = L_{21} L_{11}^T = A_{12}^T, \quad A_{22} = L_{21} L_{21}^T + L_{22} L_{22}^T.$$ From this we deduce that $L_{11}$ is the Cholesky factor of $A_{11}$, i.e., $$ L_{11} = \text{chol}{(A_{11})}.$$ Moreover, $$ L_{21} = A_{21} L_{11}^{-T}, \quad L_{22} = \text{chol}(A_{22} - L_{21}L_{21}^T).$$ We observe that different partitionings allow us to derive different variations of the basic Cholesky algorithm.

After this preparation we return to the matter at hand. It is straightforward to verify that $$ L^{-1} = \begin{bmatrix} L_{11}^{-1} & 0 \\ -L_{22}^{-1} L_{21} L_{11}^{-1} & L_{22}^{-1} \end{bmatrix}. $$ Let us now define $B = A^{-1}$ and partition $B$ conformally with $A$ and $L$. We have $$B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = A^{-1} = L^{-T} L^{-1} = \begin{bmatrix} L_{11}^{-T} & -L_{11}^{-T} L_{21}^T L_{22}^{-T} \\ 0 & L_{22}^{-T} \end{bmatrix} \begin{bmatrix} L_{11}^{-1} & 0 \\ -L_{22}^{-1} L_{21} L_{11}^{-1} & L_{22}^{-1} \end{bmatrix}.$$ The expression for $B_{22}$ is simply $$ B_{22} = L_{22}^{-T}L_{22}^{-1}.$$ At this point the connection to the original question becomes clear.
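The block identities above are easy to verify numerically. A sketch (the random SPD matrix and the block size $p$ are arbitrary choices for the check):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 2                       # p = size of the (1,1) block
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)       # SPD by construction

L = np.linalg.cholesky(A)         # lower-triangular factor, A = L @ L.T
L11, L21, L22 = L[:p, :p], L[p:, :p], L[p:, p:]
A11, A21, A22 = A[:p, :p], A[p:, :p], A[p:, p:]

# The three block identities derived above:
assert np.allclose(A11, L11 @ L11.T)
assert np.allclose(A21, L21 @ L11.T)
assert np.allclose(A22, L21 @ L21.T + L22 @ L22.T)

# And B22, the trailing block of A^{-1}, equals L22^{-T} L22^{-1}:
B = np.linalg.inv(A)
assert np.allclose(B[p:, p:], np.linalg.inv(L22.T) @ np.linalg.inv(L22))
```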
In the case of $k=1$, i.e., omission of the first row and column of $B$, we are left with the block $B_{22}$, which corresponds to the partitioning that assigns a single row to the first block row and all remaining rows to the second block row. We observe that $$B_{22}^{-1} = L_{22} L_{22}^T = A_{22} - L_{21} L_{21}^T.$$ Since $L_{11}$ is merely a 1-by-1 block, the explicit computation of $L_{21}$ and $B_{22}^{-1}$ is not only immediate, but constitutes one of the first steps taken by the standard formulation of Cholesky's algorithm.
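In code, the $k=1$ case reduces to a Schur complement and a matrix-vector product: since the system matrix $B_{22}$ has the explicitly known inverse $A_{22} - L_{21}L_{21}^T$, no linear solve is needed at all. A sketch (the random SPD test matrix is my own choice):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)       # SPD test matrix
b = rng.standard_normal(n - 1)

l11 = np.sqrt(A[0, 0])            # L11 = chol(A11) is a scalar here
L21 = A[1:, :1] / l11             # L21 = A21 L11^{-T}
S = A[1:, 1:] - L21 @ L21.T       # S = B22^{-1}, the Schur complement

x = S @ b                         # B22 x = b  <=>  x = B22^{-1} b = S b

# Check against the naive approach (inverting A explicitly):
x_naive = np.linalg.solve(np.linalg.inv(A)[1:, 1:], b)
assert np.allclose(x, x_naive)
```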


This covers the special case of $k=1$. I expect that the general case can be reduced to this one by swapping the first row (column) with the $k$th row (column) of $A$, i.e., by replacing $A$ with $PAP^T$ for a suitable permutation matrix $P$.
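The permutation idea can be sketched as follows (a sketch under my own conventions: 0-indexed $k$, a random SPD test matrix, and a final re-sort so the remaining indices appear in their original order, since $(PAP^T)^{-1} = PA^{-1}P^T$ leaves them permuted):

```python
import numpy as np

def solve_inverse_submatrix(A, b, k):
    """Solve tilde{(A^{-1})}_k x = b for SPD A, without forming A^{-1}."""
    n = A.shape[0]
    perm = np.arange(n)
    perm[[0, k]] = perm[[k, 0]]           # P swaps row/column 0 and k
    Ap = A[np.ix_(perm, perm)]            # P A P^T

    # Apply the k = 1 construction to the permuted matrix:
    L21 = Ap[1:, :1] / np.sqrt(Ap[0, 0])
    S = Ap[1:, 1:] - L21 @ L21.T          # inverse of the trailing block of Ap^{-1}

    # Rows 1..n-1 of Ap carry the original indices != k, but out of order;
    # q re-sorts them so that x matches tilde{(A^{-1})}_k in original order.
    q = np.argsort(perm[1:])
    c = np.empty(n - 1)
    c[q] = b
    return (S @ c)[q]

# Sanity check against the naive approach:
rng = np.random.default_rng(2)
n, k = 6, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n - 1)
keep = [i for i in range(n) if i != k]
x_naive = np.linalg.solve(np.linalg.inv(A)[np.ix_(keep, keep)], b)
assert np.allclose(solve_inverse_submatrix(A, b, k), x_naive)
```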