For two real and (entrywise) positive $n\times n$ matrices $\mathbf{A}$ and $\mathbf{D}$, where $\mathbf{A}$ is symmetric and $\mathbf{D}$ is diagonal, I'd like to evaluate: $$ f= \mathrm{tr}\left( (\mathbf{D} + \mathbf{A})^{-1}\mathbf{A} \right).\tag{1} $$
Is there some clever way to approach this which can avoid directly evaluating the matrix inverse $(\mathbf{D} + \mathbf{A})^{-1}$? In my case $n$ is large $(\approx10^5)$ and $\mathbf{A}$ is fairly sparse, so calculating the inverse of $\mathbf{A}+\mathbf{D}$ is difficult.
By the way, we may rewrite $f$ as follows: \begin{align} f&= \mathrm{tr}\left( (\mathbf{D} + \mathbf{A})^{-1}(\mathbf{D}+\mathbf{A}-\mathbf{D}) \right)\\ &= n-\mathrm{tr}\left( (\mathbf{D} + \mathbf{A})^{-1}\mathbf{D} \right)\\ &= n-\mathrm{tr}\left( \mathbf{D}^{1/2}(\mathbf{D} + \mathbf{A})^{-1}\mathbf{D}^{1/2} \right)\\ &= n-\mathrm{tr}\left( (I+\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2})^{-1} \right).\tag{2} \end{align} So, we can also obtain the trace by solving the symmetric eigenvalue problem for $I+\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}$. But when $\mathbf{A}$ is symmetric, large and sparse, is the eigenvalue problem more tractable than the matrix inversion problem?
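The rewrite $(2)$ is easy to check numerically on a small random instance. The following sketch (my own illustration, not part of the original question) compares a direct evaluation of $f$ against the eigenvalue form $n - \operatorname{tr}\big((I+\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2})^{-1}\big)$; the instance is made diagonally dominant so that $\mathbf{D}+\mathbf{A}$ is safely invertible.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Small random instance: A symmetric with positive entries, D positive diagonal.
# Choosing d >= 2n makes D + A strictly diagonally dominant, hence invertible.
A = rng.random((n, n))
A = A + A.T
d = rng.random(n) + 2.0 * n
D = np.diag(d)

# Direct evaluation of f = tr((D + A)^{-1} A).
f_direct = np.trace(np.linalg.solve(D + A, A))

# Rewrite (2): f = n - sum of reciprocal eigenvalues of S = I + D^{-1/2} A D^{-1/2}.
S = np.eye(n) + (A / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]
f_eig = n - np.sum(1.0 / np.linalg.eigvalsh(S))

print(f_direct, f_eig)  # agree to machine precision
```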
At this point it is possible to combine the comments and present at least a first solution, one which mainly requires sparse triangular solves.
In principle, we can compute the trace of $A^{-1}$ using the formula $$ \text{tr}(A^{-1}) = \sum_{j=1}^n e_j^T A^{-1} e_j,$$ where $e_j$ is the $j$th column of the $n$ by $n$ identity matrix. We solve the linear systems $$ Ax_j = e_j$$ and then compute $$ \text{tr}(A^{-1}) = \sum_{j=1}^n e_j^T x_j.$$ This approach is attractive if it is possible to compute a (sparse) factorization of $A$, say, $A = LU$. Then $$ \text{tr}(A^{-1}) = \sum_{j=1}^n e_j^T A^{-1} e_j = \sum_{j=1}^n e_j^T U^{-1} L^{-1} e_j.$$ We observe that it suffices to solve the lower triangular linear systems $$ U^T y_j = e_j, \quad Lz_j = e_j $$ (note that $U^T$ is lower triangular) and compute $$ \text{tr}(A^{-1}) = \sum_{j=1}^n y_j^T z_j.$$
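As a sketch of this idea (my own, under the assumption that a sparse LU factorization is affordable), `scipy.sparse.linalg.splu` factors the matrix once and then each $e_j^T A^{-1} e_j$ costs only a pair of triangular solves:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)
n = 200

# Sparse, symmetric test matrix, made diagonally dominant so it is safely invertible.
M = sp.random(n, n, density=0.02, random_state=rng, format="csr")
M = M + M.T + sp.eye(n) * n

# Factor once; tr(M^{-1}) = sum_j e_j^T M^{-1} e_j via n sparse triangular solve pairs.
lu = spla.splu(M.tocsc())
e = np.eye(n)
trace_inv = sum(lu.solve(e[:, j])[j] for j in range(n))

# Dense sanity check (only feasible at this toy size).
print(trace_inv, np.trace(np.linalg.inv(M.toarray())))
```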
Now, user1551's rewrite suggests that we should compute the trace of the matrix $$G = D^{\frac{1}{2}} (D + A)^{-1} D^{\frac{1}{2}}.$$ It is theoretically possible to factor the symmetric matrix $(D + A)$ as $$ D + A = LHL^T$$ where $L$ is lower triangular and $H$ is diagonal, but it need not be practical, due to fill-in during the factorization. In general, this is called an $LDL^T$ factorization, a square-root-free generalization of Cholesky's algorithm which applies to general symmetric matrices. It follows that $$ \text{tr}(G) = \sum_{j=1}^n e_j^T D^{\frac{1}{2}} L^{-T} H^{-1} L^{-1}D^{\frac{1}{2}} e_j.$$ As before, the task reduces to solving the lower triangular linear systems $$ L w_j = D^{\frac{1}{2}} e_j$$ and accumulating $$ \text{tr}(G) = \sum_{j=1}^n w_j^T H^{-1} w_j,$$ which costs only a diagonal scaling by $H^{-1}$ and inner products.
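A small dense sketch of this route (mine, not from the original answer) uses `scipy.linalg.ldl`. Caveats: `ldl` returns a permutation, and for indefinite matrices $H$ may contain $2\times 2$ blocks; the instance below is made diagonally dominant and positive definite, so all pivots are $1\times 1$ and $H$ is genuinely diagonal.

```python
import numpy as np
from scipy.linalg import ldl, solve_triangular

rng = np.random.default_rng(2)
n = 8

# Entrywise positive symmetric A, positive diagonal D; d >= 2n forces
# strict diagonal dominance, hence positive definiteness of D + A.
A = rng.random((n, n))
A = A + A.T
d = rng.random(n) + 2.0 * n
M = np.diag(d) + A

# Symmetric factorization D + A = L H L^T (L[perm] is lower triangular).
L, H, perm = ldl(M, lower=True)
h = np.diag(H)

# Solve L w_j = D^{1/2} e_j for all j at once: the columns of W are the w_j.
W = solve_triangular(L[perm], np.diag(np.sqrt(d))[perm], lower=True)

# tr(G) = sum_j w_j^T H^{-1} w_j = sum over all entries of W_ij^2 / h_i.
tr_G = np.sum(W**2 / h[:, None])

# Check against the direct expression G = D^{1/2} (D + A)^{-1} D^{1/2}.
Dh = np.diag(np.sqrt(d))
G = Dh @ np.linalg.solve(M, Dh)
print(tr_G, np.trace(G))
```

For the large sparse case one would replace `ldl` with a sparse $LDL^T$ code (e.g. CHOLMOD); the trace accumulation is unchanged.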
If fill-in is a major issue, then iterative methods must be used to solve the necessary linear systems.
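For instance (my own sketch, assuming $D+A$ is positive definite so that conjugate gradients applies), each diagonal entry $(D+A)^{-1}_{jj}$ can be obtained by an iterative solve that only touches the sparse matrix through matrix-vector products, and then $f = n - \sum_j d_j\,(D+A)^{-1}_{jj}$ by rewrite $(2)$:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(3)
n = 200

# Sparse symmetric A with positive entries; choosing d larger than the row
# sums of A makes D + A strictly diagonally dominant, hence positive definite.
A = sp.random(n, n, density=0.01, random_state=rng, format="csr")
A = A + A.T
d = np.asarray(A.sum(axis=1)).ravel() + 1.0
M = sp.diags(d) + A

# f = n - sum_j d_j * e_j^T (D + A)^{-1} e_j, each solve done matrix-free by CG.
f = float(n)
for j in range(n):
    e = np.zeros(n)
    e[j] = 1.0
    x, info = spla.cg(M, e)
    f -= d[j] * x[j]

print(f)
```

The $n$ solves are independent, which is exactly what makes the parallel decoupling below possible.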
From a practical perspective, the approach is attractive because it decouples the calculation into a sequence of independent problems which can be solved in parallel with no need for communication between the processors.