This is something I stumbled across in my work when tweaking some matrices and I'm wondering if this is a provable thing.
Say we have a square symmetric invertible matrix:
$$ \begin{bmatrix} a&b&c\\ b&g&b\\ c&b&x\\ \end{bmatrix} $$
If I expand its size (and hence its rank) by adding a row and column of zeros, but keep a '1' on the diagonal: $$ \begin{bmatrix} a & b & 0 & c \\ b & g & 0 & b \\ 0 & 0 & 1 & 0 \\ c & b & 0 & x \\ \end{bmatrix} $$
Can I expect the following to be true in all cases (where the initial matrix is square, symmetric, and invertible)?
- The expanded matrix is invertible, and
- The entries of the inverse of the expanded matrix are otherwise unchanged (i.e., say the original matrix is $A$ and the expanded matrix is $B$; if we shrink $B^{-1}$ by removing the same row and column that we added to $A$ to turn it into $B$, then we must have $A^{-1} = removeExtraRowAndCol(B^{-1})$)
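(Not a proof, but the claim is easy to poke at numerically. A quick NumPy sketch, with arbitrary example entries and an insertion position `r` chosen just for illustration:)

```python
import numpy as np

# An arbitrary symmetric invertible matrix A (entries are just an example)
A = np.array([[4.0, 1.0, 2.0],
              [1.0, 5.0, 1.0],
              [2.0, 1.0, 6.0]])

# Expand A to B: insert a zero row and column at index r, then set B[r, r] = 1
r = 2  # 0-based position of the inserted row/column
B = np.insert(np.insert(A, r, 0.0, axis=0), r, 0.0, axis=1)
B[r, r] = 1.0

# Invert B, delete the inserted row/column again, and compare with inv(A)
trimmed = np.delete(np.delete(np.linalg.inv(B), r, axis=0), r, axis=1)
print(np.allclose(trimmed, np.linalg.inv(A)))  # True
```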
Assume $A$ is $n\times n$ and invertible, and that we insert a new $r^{th}$ row and column which are zero except for a $1$ in the $(r,r)$ position. Let $E$ be the $(n+1)\times (n+1)$ matrix which has $1$ on the main diagonal, except that the $r^{th}$ row has its $1$ in the $(n+1)^{st}$ column and the $(n+1)^{st}$ row has its $1$ in the $r^{th}$ column; that is, $E$ is the permutation matrix swapping coordinates $r$ and $n+1$. Then $EE=I$ and $$EBE=\begin{pmatrix} \tilde A & 0 \\ 0 & 1\end{pmatrix},$$ where $B$ is your expanded matrix, the zero blocks are $n\times 1$ and $1\times n$, and $\tilde A$ is $A$ with its rows and columns simultaneously re-ordered (the last row/column of $A$ lands in position $r$; in your example the insertion is at $r=n$, and then $\tilde A=A$ exactly). That's because multiplication on the right by $E$ just permutes the $r^{th}$ and $(n+1)^{st}$ columns, and multiplication on the left by $E$ just permutes the $r^{th}$ and $(n+1)^{st}$ rows. The re-ordering is harmless: writing $\tilde A=QAQ^{T}$ for the corresponding $n\times n$ permutation matrix $Q$, we get $\tilde A^{-1}=QA^{-1}Q^{T}$, so $\tilde A^{-1}$ is just $A^{-1}$ with the same rows and columns re-ordered.
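To make this concrete, here is a small NumPy check of $EE=I$ and of the block structure of $EBE$, using the same shape as the example above (a $3\times 3$ matrix expanded at the third position, i.e. $r=n=3$); the entries are arbitrary:

```python
import numpy as np

n = 3
A = np.array([[4.0, 1.0, 2.0],
              [1.0, 5.0, 1.0],
              [2.0, 1.0, 6.0]])  # arbitrary symmetric invertible A

# B: A with a zero row/column inserted at 0-based index r and B[r, r] = 1
r = 2  # third position, as in the example matrix above
B = np.insert(np.insert(A, r, 0.0, axis=0), r, 0.0, axis=1)
B[r, r] = 1.0

# E: the identity with coordinates r and n swapped (a transposition)
E = np.eye(n + 1)
E[[r, n]] = E[[n, r]]

print(np.allclose(E @ E, np.eye(n + 1)))  # True: E is its own inverse
target = np.block([[A, np.zeros((n, 1))],
                   [np.zeros((1, n)), np.eye(1)]])
print(np.allclose(E @ B @ E, target))     # True: EBE = [[A, 0], [0, 1]]
```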
If we find $C=(EBE)^{-1}$, then since $E=E^{-1}$, $$ECEB=(ECEB)(EE)=EC(EBE)E=EIE=EE=I.$$ So $B^{-1}=ECE$. This is a general fact about conjugacy: Assuming all matrices involved are invertible, the inverse of $PDP^{-1}$ is $PD^{-1}P^{-1}$.
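Both identities here can be checked the same way: the sketch below verifies $B^{-1}=ECE$ for the same example matrices as above, plus the general conjugacy fact for (generically invertible) random $P$ and $D$:

```python
import numpy as np

n, r = 3, 2
A = np.array([[4.0, 1.0, 2.0],
              [1.0, 5.0, 1.0],
              [2.0, 1.0, 6.0]])  # arbitrary symmetric invertible A
B = np.insert(np.insert(A, r, 0.0, axis=0), r, 0.0, axis=1)
B[r, r] = 1.0
E = np.eye(n + 1)
E[[r, n]] = E[[n, r]]  # transposition of coordinates r and n

# C = (EBE)^{-1}; then B^{-1} = ECE, since E = E^{-1}
C = np.linalg.inv(E @ B @ E)
print(np.allclose(np.linalg.inv(B), E @ C @ E))  # True

# General conjugacy fact: inv(P D P^{-1}) = P inv(D) P^{-1}
rng = np.random.default_rng(0)
P = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)  # generically invertible
D = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)
P_inv = np.linalg.inv(P)
print(np.allclose(np.linalg.inv(P @ D @ P_inv),
                  P @ np.linalg.inv(D) @ P_inv))  # True
```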
So if we prove what we want for $C$, we can deduce what we want for $B$ by just permuting the rows/columns back. This reduces the problem to the situation in which the added row/column is the last one, and in that situation it is easy to check directly that the inverse of $$\begin{pmatrix} A & 0 \\ 0 & 1\end{pmatrix}$$ is $$\begin{pmatrix} A^{-1} & 0 \\ 0 & 1\end{pmatrix},$$ since multiplying the two block matrices gives $\begin{pmatrix} AA^{-1} & 0 \\ 0 & 1\end{pmatrix}=I$.
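Putting it all together, the block-diagonal step and the end-to-end claim for every possible insertion position can be sketched as (arbitrary example entries again):

```python
import numpy as np

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 5.0, 1.0],
              [2.0, 1.0, 6.0]])  # arbitrary symmetric invertible A
n = A.shape[0]

# inv([[A, 0], [0, 1]]) == [[inv(A), 0], [0, 1]]
block = np.block([[A, np.zeros((n, 1))],
                  [np.zeros((1, n)), np.eye(1)]])
block_inv = np.block([[np.linalg.inv(A), np.zeros((n, 1))],
                      [np.zeros((1, n)), np.eye(1)]])
print(np.allclose(np.linalg.inv(block), block_inv))  # True

# Hence the original claim, for every possible insertion position r
for r in range(n + 1):
    B = np.insert(np.insert(A, r, 0.0, axis=0), r, 0.0, axis=1)
    B[r, r] = 1.0
    trimmed = np.delete(np.delete(np.linalg.inv(B), r, axis=0), r, axis=1)
    assert np.allclose(trimmed, np.linalg.inv(A))
print("claim holds for all r")
```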