Perturbation Theory: Approximation of an inverse of a matrix


Does anyone know theorems about approximating the inverse of a matrix via perturbation theory? I would be very grateful if you could recommend some literature on the subject.

I am reading a paper that states the following: given a matrix $A$ that is square, real-valued, and consists of eigenvectors of another matrix $B$, the approximation below holds for $A$ by perturbation-theory techniques (unfortunately, the paper does not specify which techniques):

$A^{-1}=(I+H)^{-1} \approx I-H$, where $I$ is the identity matrix. The equality is fine (I verified it), but I don't understand why the approximation holds.
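As a quick sanity check of the claimed approximation (this is my own illustrative sketch, not from the paper; the matrix $H$ below is an arbitrary small perturbation), one can compare $(I+H)^{-1}$ with $I-H$ numerically and watch the error shrink like $\|H\|^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
I = np.eye(3)

for scale in (1e-1, 1e-2, 1e-3):
    # H is an arbitrary small random perturbation (an assumption for illustration).
    H = scale * rng.standard_normal((3, 3))
    exact = np.linalg.inv(I + H)
    approx = I - H
    err = np.linalg.norm(exact - approx)
    # The error should shrink roughly like ||H||^2:
    # about 100x smaller for each factor-of-10 reduction in scale.
    print(f"scale={scale:.0e}  ||H||={np.linalg.norm(H):.2e}  error={err:.2e}")
```

The quadratic decay of the error is the numerical signature of the $O(\epsilon^2)$ remainder discussed in the answer below.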

Best answer

It is just a Taylor expansion: $$\tag{1}(I+\epsilon K)^{-1} = I-\epsilon K + O(\epsilon^2), $$ so if $H:=\epsilon K$ is small with respect to the identity matrix, then you can neglect the quadratic term in the expansion. The proof of (1) is based on the Neumann series: $$ (I+\epsilon K)^{-1}= \sum_{n=0}^\infty (-1)^n\epsilon^n K^n, $$ valid for all $0\le |\epsilon|<|\lambda|^{-1}$, where $\lambda$ is the dominant eigenvalue of $K$, that is, the eigenvalue that is biggest in modulus. The expansion (1) is just the Neumann series truncated to its first two summands.
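The truncation argument can be seen numerically. Below is a small sketch (my own illustration; the matrix $K$ and the value of $\epsilon$ are arbitrary choices satisfying the convergence condition) that accumulates the partial sums $\sum_{n=0}^N (-1)^n\epsilon^n K^n$ and shows them converging to the exact inverse:

```python
import numpy as np

# Arbitrary example matrix and epsilon (assumptions for illustration only).
K = np.array([[0.0, 1.0],
              [0.5, 0.2]])
eps = 0.3

# Convergence condition: the spectral radius of eps*K must be below 1,
# i.e. |eps| < 1/|lambda| for the dominant eigenvalue lambda of K.
assert max(abs(np.linalg.eigvals(eps * K))) < 1

I = np.eye(2)
exact = np.linalg.inv(I + eps * K)

partial = np.zeros((2, 2))
term = np.eye(2)                 # n = 0 term: (-1)^0 eps^0 K^0 = I
for n in range(20):
    partial += term
    term = term @ (-eps * K)     # next term in the alternating series
    print(f"N={n:2d}  error={np.linalg.norm(partial - exact):.2e}")
```

Keeping only the $N=1$ partial sum $I - \epsilon K$ reproduces the approximation in the question, and the printed errors shrink geometrically, consistent with the $O(\epsilon^2)$ remainder in (1).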