I noticed something puzzling, and it would be very helpful to prove that it is true, since then I can speed up my code! I suspect it is simple with the right trick. Does this result perhaps already have a name, for reference?
Let $\vec u = (a, a, ..., a)^T \in \mathbb R ^p$, and let $K$ be a positive definite matrix in $\mathbb R ^{(p-1) \times (p-1)}$. I am interested in computing the matrix $$ A= \left( \vec u \vec u ^T + \begin{pmatrix} 0 & \vec 0^T \\ \vec 0 & K^{-1} \end{pmatrix} \right) ^{-1} $$ and it seems that the answer is of the form $$ A = \begin{pmatrix} c & \vec d^T \\\vec d & K \end{pmatrix} $$ i.e. the matrix $K$ remains untouched in its place (only inverted, of course), which to me was a pleasant surprise. My questions are:
- can this be shown to always hold? How? It seems like we need some special form of the SMW (Sherman–Morrison–Woodbury) formula?
- what, then, are $c$ and $\vec d$?
(I came across this as I was debugging some code for computing posterior covariance matrices, and was wondering if there is a way to prove it.)
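The observation can be checked numerically before attempting a proof. Here is a minimal sketch with numpy; the values $p = 4$ and $a = 1.5$ are arbitrary choices for illustration, and $K$ is made positive definite by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
p, a = 4, 1.5

# Build a random positive definite K of size (p-1) x (p-1).
M = rng.standard_normal((p - 1, p - 1))
K = M @ M.T + (p - 1) * np.eye(p - 1)

# Assemble u u^T + blkdiag(0, K^{-1}) and invert it.
u = a * np.ones((p, 1))
B = np.zeros((p, p))
B[1:, 1:] = np.linalg.inv(K)
A = np.linalg.inv(u @ u.T + B)

# The lower-right (p-1) x (p-1) block of A equals K itself.
assert np.allclose(A[1:, 1:], K)
```

Running this with different seeds and dimensions keeps the assertion passing, which suggests the phenomenon is not a coincidence of one particular $K$.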
Let me write $\mathbf{1} = (1, \ldots, 1)^{\top}$ so that
$$ \mathbf{u} = a \mathbf{1} \qquad\text{and}\qquad \mathbf{A}^{-1} = a^2 \mathbf{1}\mathbf{1}^{\top} + \begin{bmatrix} 0 & \mathbf{0}^{\top} \\ \mathbf{0} & \mathbf{K}^{-1} \end{bmatrix}. $$
We start with the ansatz that $\mathbf{A}$ assumes the suggested form,
$$ \mathbf{A} = \begin{bmatrix} c & \mathbf{d}^{\top} \\ \mathbf{d} & \mathbf{K} \end{bmatrix}, $$
and then plug this into $\mathbf{A}\mathbf{A}^{-1} = \mathbf{I}$ to obtain the equation
\begin{align*} \mathbf{I} = \mathbf{A}\mathbf{A}^{-1} &= a^2 \begin{bmatrix} c + \mathbf{d}^{\top}\mathbf{1} \\ \mathbf{d} + \mathbf{K}\mathbf{1} \end{bmatrix} \mathbf{1}^{\top} + \begin{bmatrix} 0 & \mathbf{d}^{\top}\mathbf{K}^{-1} \\ \mathbf{0} & \mathbf{I} \end{bmatrix} .\tag{1} \end{align*}
(Here, I am using $\mathbf{1}$ to refer to a vector of ones in any dimension, since its dimensionality can easily be inferred from the context.) Comparing the first row of both sides of $\text{(1)}$, we obtain the set of equations
\begin{gather*} a^2(c + \mathbf{d}^{\top}\mathbf{1}) = 1, \\ a^2(c + \mathbf{d}^{\top}\mathbf{1})\mathbf{1}^{\top} + \mathbf{d}^{\top}\mathbf{K}^{-1} = \mathbf{0}. \end{gather*}
Solving these equations gives
$$ c = \frac{1}{a^2} + \mathbf{1}^{\top}\mathbf{K}\mathbf{1} \qquad\text{and}\qquad \mathbf{d} = -\mathbf{K}\mathbf{1}, $$
which coincides with @user1551's answer. (The second block row of $\text{(1)}$ requires $a^2(\mathbf{d} + \mathbf{K}\mathbf{1})\mathbf{1}^{\top} = \mathbf{0}$, which is automatically satisfied by $\mathbf{d} = -\mathbf{K}\mathbf{1}$.) Although these values are obtained under the assumption on the shape of $\mathbf{A}$, the logic can be reversed to give a legitimate proof that
$$ \mathbf{A} = \begin{bmatrix} \frac{1}{a^2} + \mathbf{1}^{\top}\mathbf{K}\mathbf{1} & (-\mathbf{K}\mathbf{1})^{\top} \\ -\mathbf{K}\mathbf{1} & \mathbf{K} \end{bmatrix} $$
is indeed the correct form of $\mathbf{A}$ (by verifying that this satisfies the equation $\mathbf{A}\mathbf{A}^{-1} = \mathbf{I}$).
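For completeness, the closed form can also be verified numerically. A sketch with numpy, where the dimension $p = 5$ and $a = 0.7$ are arbitrary test values:

```python
import numpy as np

rng = np.random.default_rng(42)
p, a = 5, 0.7

# Random positive definite K and the all-ones vector.
M = rng.standard_normal((p - 1, p - 1))
K = M @ M.T + (p - 1) * np.eye(p - 1)
one = np.ones(p - 1)

# Closed form: c = 1/a^2 + 1^T K 1 and d = -K 1.
c = 1.0 / a**2 + one @ K @ one
d = -K @ one
A = np.block([[np.array([[c]]), d[None, :]],
              [d[:, None],      K]])

# A^{-1} should equal a^2 * 11^T + blkdiag(0, K^{-1}).
Ainv = a**2 * np.ones((p, p))
Ainv[1:, 1:] += np.linalg.inv(K)

assert np.allclose(A @ Ainv, np.eye(p))
```

The final assertion is exactly the check $\mathbf{A}\mathbf{A}^{-1} = \mathbf{I}$ used in the proof above.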