The inverse of the covariance matrix (the precision matrix) of a multivariate normal random vector encodes conditional dependence between pairs of variables: a zero off-diagonal entry means the corresponding pair is conditionally independent given all the other variables.
Suppose we have a precision matrix $\Sigma^{-1}$ and we want to build approximations to the actual covariance $\Sigma$ from conditional-dependence paths of different lengths between the variables in our random vector. My idea was to split the inverse as $D - X$, giving
$$ \Sigma = (D - X)^{-1} = D^{-1/2}(I - D^{-1/2}XD^{-1/2})^{-1}D^{-1/2}$$
where $D$ is a diagonal matrix and $X$ is a matrix with 0s on the diagonal. Hence we have
$$ D^{1/2} \Sigma D^{1/2} = (I - D^{-1/2}XD^{-1/2})^{-1} = \sum_{k=0}^{\infty} (D^{-1/2}XD^{-1/2})^k $$
by expanding the Neumann series. It then appeared to me that $(D^{-1/2}XD^{-1/2})^k$ would be some measure of the covariance contributed along paths of length $k$.
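As a quick numerical sketch (using a hypothetical diagonally dominant precision matrix, chosen so that the series does converge), the partial sums of $\sum_k (D^{-1/2}XD^{-1/2})^k$ do approach $D^{1/2}\Sigma D^{1/2}$:

```python
import numpy as np

# Hypothetical diagonally dominant precision matrix Sigma^{-1},
# chosen so the Neumann series converges
P = np.array([[2.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 2.0]])

d = np.diag(P)
D_half = np.diag(np.sqrt(d))            # D^{1/2}
D_inv_half = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}

X = np.diag(d) - P                      # off-diagonal part, zeros on the diagonal
B = D_inv_half @ X @ D_inv_half         # D^{-1/2} X D^{-1/2}

# Exact target: D^{1/2} Sigma D^{1/2}
target = D_half @ np.linalg.inv(P) @ D_half

# Partial sums of the series sum_{k=0}^{K} B^k
S, term = np.eye(3), np.eye(3)
for _ in range(60):
    term = term @ B
    S += term

err = np.max(np.abs(S - target))
print(err)  # tiny: the partial sums converge to D^{1/2} Sigma D^{1/2}
```

Here each $B^k$ accumulates products of normalised partial-correlation weights along length-$k$ walks between variables, which is the path interpretation above.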
This seemed intuitively sensible to me; however, it turns out that $D^{-1/2}XD^{-1/2}$ can have eigenvalues with absolute value greater than 1, in which case the series does not converge!
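For example (a hypothetical equicorrelation-style precision matrix: it is positive definite, yet the spectral radius of $D^{-1/2}XD^{-1/2}$ is well above 1):

```python
import numpy as np

# A valid (positive definite) precision matrix with strong off-diagonal entries
P = np.array([[1.0, 0.9, 0.9],
              [0.9, 1.0, 0.9],
              [0.9, 0.9, 1.0]])
assert np.all(np.linalg.eigvalsh(P) > 0)  # P is positive definite

d = np.diag(P)
X = np.diag(d) - P                        # zeros on the diagonal
B = np.diag(d**-0.5) @ X @ np.diag(d**-0.5)

rho = np.max(np.abs(np.linalg.eigvalsh(B)))
print(rho)  # spectral radius 1.8 > 1, so the series diverges
```

Convergence of the series is exactly the condition $\rho(D^{-1/2}XD^{-1/2}) < 1$; by Gershgorin's theorem this is guaranteed when $\Sigma^{-1}$ is strictly diagonally dominant, but as the example shows it fails for perfectly valid precision matrices.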
Am I missing something, or is my intuition incorrect that $\Sigma$ can be built from $\Sigma^{-1}$ by considering the different-length paths between the conditionally dependent variables?
Note
I realise that, by symmetry, the same argument can be made for building $\Sigma^{-1}$ from $\Sigma$, but that direction doesn't have the same graphical (conditional dependence) intuition to me.