Why does the inverse of a singular matrix plus a small-norm matrix have nearly identical rows?


Question:

Given two matrices $A$ and $B$, where $A$ is a singular matrix whose rows each sum to zero, and $B$ is an arbitrary matrix with $\left\| {\bf{B}} \right\| \ll \left\| {\bf{A}} \right\|$, why does the matrix ${({\bf{A}} + {\bf{B}})^{ - 1}}$ have nearly identical rows?

Example: $${\bf{A}} = \left[ {\begin{array}{*{20}{r}}{2049.01}&{ - 2048.49}&{ - 282.38}&{281.85}\\{ - 1999.24}&{2799.19}&{ - 406.72}&{ - 393.23}\\{ - 164.13}&{ - 386.62}&{2563.68}&{ - 2012.93}\\{467.42}&{ - 290.21}&{ - 1828.58}&{1651.37}\end{array}} \right]$$

Note that the sum of each row of A is zero.

$${\bf{B}} = \left[ {\begin{array}{*{20}{r}}{0.8909}&{0.1493}&{0.8143}&{0.1966}\\{0.9593}&{0.2575}&{0.2435}&{0.2511}\\{0.5472}&{0.8407}&{0.9293}&{0.616}\\{0.1386}&{0.2543}&{0.35}&{0.4733}\end{array}} \right]$$

where B is a random matrix of small norm.

$${\bf{A}} + {\bf{B}} = \left[ {\begin{array}{*{20}{r}}{2049.901}&{ - 2048.34}&{ - 281.566}&{282.0466}\\{ - 1998.28}&{2799.448}&{ - 406.477}&{ - 392.979}\\{ - 163.583}&{ - 385.779}&{2564.609}&{ - 2012.31}\\{467.5586}&{ - 289.956}&{ - 1828.23}&{1651.843}\end{array}} \right]$$ $${({\bf{A}} + {\bf{B}})^{ - 1}} = \left[ {\begin{array}{*{20}{r}}{0.0336}&{0.0732}&{0.1794}&{0.2302}\\{0.0331}&{0.0732}&{0.1796}&{0.2305}\\{0.0321}&{0.0723}&{0.1804}&{0.2315}\\{0.0318}&{0.0722}&{0.1804}&{0.2321}\end{array}} \right]$$

It can be seen that ${({\bf{A}} + {\bf{B}})^{ - 1}}$ has nearly identical rows.
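This is easy to reproduce numerically. A quick NumPy sketch using the matrices quoted above (the quoted inverse is rounded to four decimals, so the comparison is only approximate):

```python
import numpy as np

# Matrices A and B from the example above
A = np.array([[ 2049.01, -2048.49,  -282.38,   281.85],
              [-1999.24,  2799.19,  -406.72,  -393.23],
              [ -164.13,  -386.62,  2563.68, -2012.93],
              [  467.42,  -290.21, -1828.58,  1651.37]])
B = np.array([[0.8909, 0.1493, 0.8143, 0.1966],
              [0.9593, 0.2575, 0.2435, 0.2511],
              [0.5472, 0.8407, 0.9293, 0.616 ],
              [0.1386, 0.2543, 0.35  , 0.4733]])

Minv = np.linalg.inv(A + B)

# If the rows are nearly identical, every row is close to the first one:
row_spread = np.abs(Minv - Minv[0]).max()
print(Minv.round(4))
print("max deviation between rows:", row_spread)
```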

What's the reason behind it? Can it be proved? Thanks very much!

There are 2 solutions below.


If each row of $A$ sums to zero, then as Rahul points out, the vector $\vec{e} = (1,...,1)$ is in the null space of $A$. Assume that the null space is 1-dimensional so it is in fact the span of $\vec{e}$.

One interesting thought experiment is to consider a $B = 0$ type case: pick a very small vector $\vec{y} \in \operatorname{col}(A)$, i.e. a vector such that $A\vec{x} = \vec{y}$ can be solved and such that $|\vec{y}| < \epsilon$. Then the solution to $A\vec{x} = \vec{y}$ is of the form (particular solution of the inhomogeneous problem) + (general solution of the homogeneous problem). The general solution of the homogeneous problem is of course just the set of vectors $(t,t,\dots,t) = t\vec{e}$, while a particular solution of $A\vec{x} = \vec{y}$ has very small norm ($\leq C\epsilon$) because $\vec{y}$ itself is very small. So for a typical small $\vec{y}$, the solution is nearly parallel to $\vec{e}$.
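A small numerical sketch of this thought experiment (the matrix `A` here is hypothetical: a random matrix with its row means subtracted, so each row sums to zero and $\vec e$ is in the null space):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = M - M.mean(axis=1, keepdims=True)  # each row sums to zero, so A @ e = 0
e = np.ones(n)

# A very small vector y in col(A):
y = A @ (1e-6 * rng.standard_normal(n))

# A particular solution via the pseudoinverse: its norm is tiny, like y's
x_p = np.linalg.pinv(A) @ y

# General solution: x_p + t*e; for t of order 1 it is nearly parallel to e
x = x_p + 1.0 * e
cos = abs(x @ e) / (np.linalg.norm(x) * np.linalg.norm(e))
print("|x_p| =", np.linalg.norm(x_p), " cos(angle to e) =", cos)
```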

I think of your case like this: Now assume that $B$ is a matrix of very small norm and that $A + B$ is invertible. Write $\vec{y} = B\vec{e}$. Since $B$ is very small we know that $\vec{y}$ is very small and since $A\vec{e} = 0$, we know that $$ (A+B)\vec{e} = \vec{y}. $$ OK. So this means $$ (A+B)^{-1}\vec{y} = \vec{e}. $$
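The identity $(A+B)\vec e = B\vec e = \vec y$, and hence $(A+B)^{-1}\vec y = \vec e$, can be verified numerically (again with a hypothetical zero-row-sum `A` and a small random `B`):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
A = M - M.mean(axis=1, keepdims=True)   # A @ e = 0 (rows sum to zero)
B = 1e-3 * rng.standard_normal((n, n))  # small-norm perturbation
e = np.ones(n)

y = B @ e                            # tiny, because B is tiny
lhs = (A + B) @ e                    # equals A @ e + B @ e = 0 + y
e_back = np.linalg.solve(A + B, y)   # solving (A+B) x = y recovers e
print("e recovered:", e_back)
```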

We'll make one more assumption: that $(A+B)^{-1}$ is not really big in the directions transverse to $\vec{y}$ — say, at most 17 or something. (Along $\vec{y}$ itself the norm is necessarily huge, since the tiny vector $\vec{y}$ gets sent to $\vec{e}$; but stretching along $\vec{y}$ only moves points along span$(\vec{e})$, which is harmless here.) This means that for all $\vec{z}$ which are close to $\vec{y}$, we expect $(A+B)^{-1}\vec{z}$ to be close to, i.e. nearly parallel to, $\vec{e}$.

But also remember that $\vec{y}$ is really small, so it is close to zero. So in the previous paragraph I can take $\vec{z} = 0$.

So there is an open ball $U$ containing zero which $(A+B)^{-1}$ maps to a region very close to $\vec{e}$. But $(A+B)^{-1}$ is of course linear, so for any $\vec{z}$ of not too large size, we can first scale it until it lies in $U$, apply $(A+B)^{-1}$ to get something near $\vec{e}$, and then scale back to see that $(A+B)^{-1}\vec{z}$ lies fairly close to the line span$(\vec{e})$.
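The conclusion can be sanity-checked numerically: with a hypothetical zero-row-sum `A` and a very small random `B`, solving $(A+B)\vec x = \vec z$ for a generic $\vec z$ gives an $\vec x$ nearly parallel to $\vec e$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M = rng.standard_normal((n, n))
A = M - M.mean(axis=1, keepdims=True)   # rows sum to zero: A @ e = 0
B = 1e-6 * rng.standard_normal((n, n))  # very small perturbation
e = np.ones(n)

z = rng.standard_normal(n)              # a generic right-hand side
x = np.linalg.solve(A + B, z)
cos = abs(x @ e) / (np.linalg.norm(x) * np.linalg.norm(e))
print("cos(angle between x and e) =", cos)
```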


Consider the SVD of $A \in \mathbb{R}^{n \times n}$. Let the vector $\vec e = (1,...,1)$ span the null space of $A$. Then the right singular vector corresponding to the zero singular value is $\vec v = \vec e / \sqrt{n}$ (and the left is some $\vec u$).

After adding $B$, the singular values and vectors change only a little. So our zero singular value becomes positive but very small (call it $\varepsilon$), and the right singular vector $\vec e / \sqrt{n}$ stays almost the same.

When the matrix is inverted, the singular values are inverted too, and the right and left singular vectors swap roles. So $$ (A + B)^{-1} \approx \frac{1}{\varepsilon} \vec v \vec u^T $$ since all the other inverted singular values $1/\sigma_i$ are now much smaller than $1/\varepsilon$.

Finally, recall that the entries of $\vec v$ are all equal, so the rows of $(A + B)^{-1}$ are almost equal too.
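A sketch of this SVD argument in NumPy (again with a hypothetical zero-row-sum `A`; the variables `v`, `u`, and `eps` correspond to $\vec v$, $\vec u$, and $\varepsilon$ above):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))
A = M - M.mean(axis=1, keepdims=True)   # rank n-1; e/sqrt(n) spans the null space
B = 1e-4 * rng.standard_normal((n, n))

U, s, Vt = np.linalg.svd(A + B)         # singular values sorted descending
eps = s[-1]                             # the perturbed "zero" value: tiny but positive
v = Vt[-1]                              # right singular vector, close to +/- e/sqrt(n)
u = U[:, -1]

# After inversion, the rank-1 term (1/eps) v u^T dominates the inverse:
Minv = np.linalg.inv(A + B)
rank1 = np.outer(v, u) / eps
rel_err = np.linalg.norm(Minv - rank1) / np.linalg.norm(Minv)
print("eps =", eps, " relative error of rank-1 approximation:", rel_err)
```

Since the entries of `v` are all (nearly) equal, the rows of `rank1` — and hence of `Minv` — are nearly equal, exactly as in the example from the question.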