I need to compute the inverse of a matrix sum $A+D$, where the inverse of $A\in\mathbb{R}^{n\times n}$ is known. The matrix $D\in\mathbb{R}^{n\times n}$ is a diagonal matrix which can be thought of as a perturbation to $A$.
Assuming that $A+D$ is still full-rank, is there any efficient way to compute $(A+D)^{-1}$ using the known quantity $A^{-1}$?
Continuing JimmyK4542's answer, we can write the following two relationships using the Woodbury matrix identity:
$$(A+D)^{-1} = A^{-1}-A^{-1}(D^{-1}+A^{-1})^{-1}A^{-1}$$
$$ (D^{-1}+A^{-1})^{-1} = D - D(A+D)^{-1}D $$
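Both identities are easy to sanity-check numerically. The sketch below (my own illustration, using an arbitrary well-conditioned random test matrix) verifies each one with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # shifted to keep A well-conditioned
D = np.diag(rng.uniform(0.01, 0.1, size=n))      # small diagonal perturbation

Ainv = np.linalg.inv(A)

# First identity: (A+D)^{-1} = A^{-1} - A^{-1}(D^{-1} + A^{-1})^{-1} A^{-1}
lhs1 = np.linalg.inv(A + D)
rhs1 = Ainv - Ainv @ np.linalg.inv(np.linalg.inv(D) + Ainv) @ Ainv
print(np.allclose(lhs1, rhs1))  # True

# Second identity: (D^{-1} + A^{-1})^{-1} = D - D(A+D)^{-1} D
lhs2 = np.linalg.inv(np.linalg.inv(D) + Ainv)
rhs2 = D - D @ np.linalg.inv(A + D) @ D
print(np.allclose(lhs2, rhs2))  # True
```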
In the second, we've treated $D^{-1}$ as large and $A^{-1}$ as the perturbation, which makes sense if $D$ is small to begin with. Plugging the second into the first, I get
$$(A+D)^{-1} = A^{-1}-A^{-1}(D - D(A+D)^{-1}D)A^{-1}$$
Expanding
$$(A+D)^{-1} = A^{-1}-A^{-1}DA^{-1} + A^{-1}D(A+D)^{-1}DA^{-1}$$
Now if $D$ is a small perturbation, we approximate $(A+D)^{-1}\approx A^{-1}$ on the right-hand side only, and we get the approximation
$$(A+D)^{-1} \approx A^{-1}-A^{-1}DA^{-1} + A^{-1}DA^{-1}DA^{-1}$$
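Since we dropped a term containing three factors of $D$, the error of this second-order approximation should scale like $\|D\|^3$. A quick check of that (again with an arbitrary random test matrix of my choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
Ainv = np.linalg.inv(A)

eps = 1e-3
D = np.diag(rng.uniform(-eps, eps, size=n))  # small diagonal perturbation

exact = np.linalg.inv(A + D)
approx = Ainv - Ainv @ D @ Ainv + Ainv @ D @ Ainv @ D @ Ainv

# The dropped term is A^{-1}(DA^{-1})^3 + ..., so the error is O(eps^3)
err = np.max(np.abs(exact - approx))
print(err)  # tiny: on the order of eps^3 times modest constants
```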
If we don't approximate, but instead use the first Woodbury identity again, we get
$$ (A+D)^{-1} = A^{-1}-A^{-1}DA^{-1} + A^{-1}D(A^{-1}-A^{-1}(D^{-1}+A^{-1})^{-1}A^{-1})DA^{-1} $$
And expanding again
$$ (A+D)^{-1} = A^{-1}-A^{-1}DA^{-1} + A^{-1}DA^{-1}DA^{-1} -A^{-1}DA^{-1}(D^{-1}+A^{-1})^{-1}A^{-1}DA^{-1} $$
If $D$ is a small perturbation, we can approximate $(D^{-1}+A^{-1})^{-1}\approx D$ on the right-hand side only, and we get the approximation
$$ (A+D)^{-1} \approx A^{-1}-A^{-1}DA^{-1} + A^{-1}DA^{-1}DA^{-1} -A^{-1}DA^{-1}DA^{-1}DA^{-1} $$
I claim that continuing the procedure leads to the $N$th-order approximation
$$ (A+D)^{-1} \approx A^{-1}\sum_{n=0}^N (-1)^{n} (DA^{-1})^n, $$
which is the $N$th Taylor polynomial of $(A+D)^{-1}$ about $D = \mathbf{0}$ (boldface zero means the zero matrix). Intuitively, the series converges if $(DA^{-1})^n\rightarrow \mathbf{0}$ "fast enough" as $n\rightarrow\infty$.

Thinking about this non-rigorously: if any eigenvalue of $DA^{-1}$ has magnitude larger than $1$, repeated powers of the matrix grow without bound and the summation diverges. If all the eigenvalues of $DA^{-1}$ have magnitude smaller than $1$ (i.e., the spectral radius is less than $1$), I would expect the terms in the sum to get smaller and smaller, contributing less and less to the result, and giving a convergent sum. In that case I would expect the error to be bounded by roughly the first omitted term, giving you an iterative way to know when to stop: calculate term by term and sum up until the latest term's contribution is less than, say, $10^{-16}$ in all entries of the matrix (machine precision for doubles).
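That stopping rule is straightforward to implement. A minimal sketch (the function name and tolerance are my own choices, and the test matrix is an arbitrary one constructed so the spectral radius of $DA^{-1}$ is comfortably below $1$):

```python
import numpy as np

def neumann_inverse(Ainv, D, tol=1e-16, max_terms=1000):
    """Approximate (A+D)^{-1} by summing A^{-1} sum_n (-D A^{-1})^n term by term,
    stopping once a term contributes less than tol in every entry.
    Converges only when every eigenvalue of D A^{-1} has magnitude < 1."""
    M = -D @ Ainv           # each new term is the previous one right-multiplied by M
    term = Ainv.copy()      # n = 0 term: A^{-1}
    total = Ainv.copy()
    for _ in range(max_terms):
        term = term @ M
        total += term
        if np.max(np.abs(term)) < tol:
            return total
    raise RuntimeError("series did not converge; check the eigenvalues of D A^{-1}")

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)
D = np.diag(rng.uniform(-0.1, 0.1, size=n))
Ainv = np.linalg.inv(A)

# Convergence check: spectral radius of D A^{-1} must be < 1
print(np.max(np.abs(np.linalg.eigvals(D @ Ainv))))

approx = neumann_inverse(Ainv, D)
print(np.allclose(approx, np.linalg.inv(A + D)))  # True
```

Note that each iteration costs one matrix-matrix multiply, so this only pays off when the series converges in few terms, i.e., when the spectral radius of $DA^{-1}$ is well below $1$.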
Whether you can use the above technique thus hinges on the eigenvalues of $DA^{-1}$. I hope they are small enough!