Moore-Penrose pseudoinverse as a limit


For any matrix $A \in \mathbb{C}^{m \times n}$, there exists a unique matrix $A^{+}$ such that:

$$A^{+} A = \left( A^{+} A \right)^{*}, \qquad A A^{+} = \left( A A^{+} \right)^{*}, \qquad AA^{+}A=A, \qquad A^{+}AA^{+}=A^+$$
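As a quick sanity check (not part of the proof), NumPy's `np.linalg.pinv` computes $A^{+}$, so the four Penrose conditions can be verified numerically on a small example; the matrix below is a hypothetical choice:

```python
import numpy as np

# A fixed rank-deficient complex matrix (row 2 = 2 * row 1, so rank(A) = 2).
A = np.array([[1, 2, 3],
              [2, 4, 6],
              [1, 0, 1j]])
Ap = np.linalg.pinv(A)           # Moore-Penrose pseudoinverse A+

herm = lambda M: M.conj().T      # the * (conjugate transpose) operator

assert np.allclose(Ap @ A, herm(Ap @ A))   # (A+ A)* = A+ A
assert np.allclose(A @ Ap, herm(A @ Ap))   # (A A+)* = A A+
assert np.allclose(A @ Ap @ A, A)          # A A+ A = A
assert np.allclose(Ap @ A @ Ap, Ap)        # A+ A A+ = A+
```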

I have to prove that (note that $AA^{*}+\varepsilon I_{m}$ is invertible for every $\varepsilon>0$):

$$ A^{+} = \lim_{\varepsilon \to 0^{+}} A^{*} \left( A A^{*} + \varepsilon I_{m} \right)^{-1}$$


My idea is to prove that the limit expression satisfies the $4$ conditions. Using the fact that $AA^{*}+\varepsilon I_{m}$ is Hermitian, and hence $(AA^{*}+\varepsilon I_{m})^{-1}=\left((AA^{*}+\varepsilon I_{m})^{-1}\right)^*$, I was able to prove the first two conditions, but I can't see how to handle the last two. Any ideas?

P.S.: When I verify $A^{+}A=(A^{+}A)^{*}$, can I move the limit inside the ${}^{*}$ operator each time? $$\lim_{\varepsilon\to 0^+}\left(A^{*}(AA^{*}+\varepsilon I_{m})^{-1}A\right)^*=\left(\lim_{\varepsilon\to 0^+}A^{*}(AA^{*}+\varepsilon I_{m})^{-1}\cdot A\right)^{*}$$
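Before proving it, the claimed limit can be checked numerically (a sketch, not a proof): for a rank-deficient matrix of my own choosing, $A^{*}(AA^{*}+\varepsilon I_{m})^{-1}$ approaches `np.linalg.pinv(A)` as $\varepsilon \to 0^{+}$.

```python
import numpy as np

# Hypothetical example: A has rank 1, so it has a zero singular value and
# AA* is singular -- the +eps*I term is what makes the inverse exist.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0]])
m = A.shape[0]
pinv = np.linalg.pinv(A)

for eps in (1e-2, 1e-5, 1e-8):
    approx = A.conj().T @ np.linalg.inv(A @ A.conj().T + eps * np.eye(m))
    print(eps, np.linalg.norm(approx - pinv))   # error shrinks with eps
```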


There are 2 best solutions below


You can do it by the singular value decomposition $$ A = U\Sigma V^*, $$ where $U$ is an $m\times m$ unitary, $V$ is an $n\times n$ unitary, and $\Sigma$ is an $m\times n$ nonnegative rectangular diagonal matrix, i.e. $$ \Sigma = \begin{bmatrix} s_1& 0&\cdots& 0\\ 0&s_2&\cdots &0\\ \vdots& \vdots& \ddots&\vdots\\ 0&0&\cdots& s_n\\ \vdots&\vdots&\vdots&\vdots\\ 0&\cdots&\cdots&0 \end{bmatrix},\qquad s_k\ge0. $$

We can then carry out your program: $$ AA^{*}(AA^{*}+\varepsilon I_{m})^{-1}A= U\Sigma\Sigma^*(\Sigma\Sigma^*+\varepsilon I_m)^{-1}\Sigma V^*. $$

As all the matrices sandwiched between the unitaries are diagonal, one easily finds $$ \Sigma\Sigma^*(\Sigma\Sigma^*+\varepsilon I_m)^{-1} = \operatorname{diag}\left( \frac{s_1^2}{s_1^2+\varepsilon},\frac{s_2^2}{s_2^2+\varepsilon},\ldots,\frac{s_n^2}{s_n^2+\varepsilon},0,\ldots,0\right). $$

For $s_k\neq 0$ the limit of that entry as $\varepsilon\to 0^+$ is $1$, and for $s_k=0$ the entry is identically $0$. This implies that $$ \lim_{\varepsilon\to 0^+}\Sigma\Sigma^*(\Sigma\Sigma^*+\varepsilon I_m)^{-1}\Sigma=\Sigma, $$ so $\lim_{\varepsilon\to 0^+} AA^{*}(AA^{*}+\varepsilon I_m)^{-1}A = U\Sigma V^* = A$, which is the third condition; the fourth follows from the analogous entrywise computation for $A^{+}AA^{+}$.
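The entrywise limit above can also be seen numerically; here `s` is a hypothetical vector of singular values, one of them zero:

```python
import numpy as np

# Each diagonal entry s_k^2 / (s_k^2 + eps) tends to 1 when s_k > 0 and is
# identically 0 when s_k = 0, which is exactly the limit used above.
s = np.array([3.0, 1.0, 0.0])
for eps in (1e-1, 1e-4, 1e-8):
    print(eps, s**2 / (s**2 + eps))   # last entry stays exactly 0
```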


Let the SVD of ${\bf A} \in \mathbb{C}^{m \times n}$ be

$$ {\bf A} = {\bf U} {\bf \Sigma} {\bf V}^* = \begin{bmatrix} {\bf U}_1 & {\bf U}_2 \end{bmatrix} \begin{bmatrix} {\bf \Sigma}_1 & {\bf O} \\ {\bf O} & {\bf O}\end{bmatrix} \begin{bmatrix} {\bf V}_1^*\\ {\bf V}_2^*\end{bmatrix} $$

where $r := \operatorname{rank} ({\bf A})$ and ${\bf \Sigma}_1$ is $r \times r$ and invertible. Hence,

$$ \begin{aligned} {\bf A}^+ &= \lim_{\varepsilon \to 0^{+}} {\bf A}^* \left( {\bf A} {\bf A}^* + \varepsilon \, {\bf I}_m \right)^{-1} \\ &= \lim_{\varepsilon \to 0^{+}} \begin{bmatrix} {\bf V}_1 & {\bf V}_2 \end{bmatrix} \begin{bmatrix} {\bf \Sigma}_1 \left( {\bf \Sigma}_1^2 + \varepsilon \, {\bf I}_r \right)^{-1} & {\bf O} \\ {\bf O} & {\bf O}\end{bmatrix} \begin{bmatrix} {\bf U}_1^*\\ {\bf U}_2^*\end{bmatrix} = \color{blue}{\begin{bmatrix} {\bf V}_1 & {\bf V}_2 \end{bmatrix} \begin{bmatrix} {\bf \Sigma}_1^{-1} & {\bf O} \\ {\bf O} & {\bf O}\end{bmatrix} \begin{bmatrix} {\bf U}_1^*\\ {\bf U}_2^*\end{bmatrix}} \end{aligned} $$
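The final block formula can be checked numerically (a sketch; the test matrix and the rank cutoff `1e-12` are my own choices): build ${\bf V}_1 {\bf \Sigma}_1^{-1} {\bf U}_1^*$ from a truncated SVD and compare with `np.linalg.pinv`.

```python
import numpy as np

# Hypothetical rank-2 example: row 3 = row 1 + row 2.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))                     # numerical rank r = rank(A)

# Keep only the blocks belonging to the r nonzero singular values.
U1, s1, V1 = U[:, :r], s[:r], Vh[:r, :].conj().T
Ap = V1 @ np.diag(1.0 / s1) @ U1.conj().T      # V1 Sigma1^{-1} U1*

assert np.allclose(Ap, np.linalg.pinv(A))      # matches the pseudoinverse
```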