Computing the pseudoinverse of a $2 \times 2$ matrix


If $A=\left(\begin{array}{cc}1&1\\1&1\end{array}\right)$, how can one calculate, and prove, that the Moore-Penrose pseudoinverse of $A$ equals $\left(\begin{array}{cc}\frac{1}{4}&\frac{1}{4}\\\frac{1}{4}&\frac{1}{4}\end{array}\right)$? Thank you very much.



Best answer:

Your $A$ is self-adjoint, so the Moore-Penrose inverse is simply the inverse of the restriction of $A$ to the orthogonal complement of its kernel (and zero on the kernel). You have $$ A= 2 P_1 + 0\,(I-P_1), $$ where $P_1=\begin{bmatrix} 1/2&1/2\\ 1/2&1/2\end{bmatrix}$ is the orthogonal projection onto the range of $A$. On the range of $P_1$, $A$ acts as multiplication by $2$. So $$ A^+=\frac12\,P_1=\begin{bmatrix} 1/4&1/4\\ 1/4&1/4\end{bmatrix}. $$
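A quick numerical sketch of this argument with numpy (the projection $P_1$ and the claimed $A^+$ are hard-coded from the answer above; the asserts check the four Penrose conditions, which characterize the pseudoinverse uniquely):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0]])

# Spectral projection onto the range of A (eigenvalue 2)
P1 = np.array([[0.5, 0.5], [0.5, 0.5]])

# On ran(P1), A acts as multiplication by 2, so A^+ = (1/2) P1
A_pinv = 0.5 * P1

# The four Penrose conditions determine A^+ uniquely
assert np.allclose(A @ A_pinv @ A, A)            # A A^+ A = A
assert np.allclose(A_pinv @ A @ A_pinv, A_pinv)  # A^+ A A^+ = A^+
assert np.allclose((A @ A_pinv).T, A @ A_pinv)   # (A A^+)^* = A A^+
assert np.allclose((A_pinv @ A).T, A_pinv @ A)   # (A^+ A)^* = A^+ A

print(A_pinv)  # [[0.25 0.25]
               #  [0.25 0.25]]
```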


When $A$ is not self-adjoint, one can use the identity $A^+=A^*(AA^*)^+$ (which is equal to $(A^*A)^+A^*$).

Another answer:

One way is to find a full-rank decomposition of $A$, say $A=BC$, where $B$ is left invertible (rank equal to its number of columns) and $C$ is right invertible (rank equal to its number of rows). This can be accomplished in various ways: via the LU decomposition, the QR decomposition, or the SVD.

For the LU decomposition, we write $A$ as the product $P^TLU$, where $P$ is a permutation matrix, $L$ is lower triangular and $U$ is in row echelon form (here $P=I$). In this case $$ A=\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix} $$ Then we get $C$ by removing the zero rows from the row echelon form matrix, and $B$ by removing the last column from the lower triangular matrix (in general, remove the last $k$ columns, where $k$ is the number of zero rows).

When $B$ is left invertible, it can be shown that its pseudoinverse is $B^+=(B^TB)^{-1}B^T$; if $C$ is right invertible, then $C^+=C^T(CC^T)^{-1}$, and moreover $A^+=C^+B^+$ (you can find the theory in several textbooks).

In this case $B=\begin{bmatrix}1\\1\end{bmatrix}$, so $B^TB=[2]$ and $$ B^+=\frac{1}{2}\begin{bmatrix} 1 & 1 \end{bmatrix} $$ Similarly, $C=\begin{bmatrix} 1 & 1 \end{bmatrix}$, so $CC^T=[2]$ and $$ C^+=\frac{1}{2}\begin{bmatrix}1\\1\end{bmatrix} $$ Thus $$ A^+=C^+B^+= \frac{1}{4}\begin{bmatrix}1\\1\end{bmatrix} \begin{bmatrix}1 & 1\end{bmatrix}= \frac{1}{4}\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} $$
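The full-rank-factorization route can be carried out numerically as follows (a sketch with numpy; the factors $B$ and $C$ are the ones read off from the LU decomposition above):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0]])

# Full-rank factorization A = B C from the LU decomposition
B = np.array([[1.0], [1.0]])   # 2x1, full column rank (left invertible)
C = np.array([[1.0, 1.0]])     # 1x2, full row rank (right invertible)
assert np.allclose(B @ C, A)

# B^T B and C C^T are the invertible (here 1x1) Gram matrices
B_pinv = np.linalg.inv(B.T @ B) @ B.T   # = (1/2) [1 1]
C_pinv = C.T @ np.linalg.inv(C @ C.T)   # = (1/2) [1; 1]

A_pinv = C_pinv @ B_pinv
print(A_pinv)  # [[0.25 0.25]
               #  [0.25 0.25]]
```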

With the singular value decomposition $A=U\Sigma V^H$, where $U$ and $V$ are unitary, we have $$ A^+=V\Sigma^+U^H $$ where $\Sigma^+$ is obtained by transposing $\Sigma$ and inverting its nonzero singular values (for square $A$, it is simply the diagonal matrix of inverted nonzero singular values).
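The SVD route is also a one-liner to check with numpy (a sketch; the `1e-12` tolerance for deciding which singular values count as zero is an arbitrary choice, and `np.linalg.pinv` does exactly this computation internally):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0]])

# A = U diag(s) V^T; here the singular values are s = [2, 0]
U, s, Vt = np.linalg.svd(A)

# Sigma^+: invert the nonzero singular values, leave the zeros as zeros
s_pinv = np.array([1.0 / x if x > 1e-12 else 0.0 for x in s])

A_pinv = Vt.T @ np.diag(s_pinv) @ U.T
assert np.allclose(A_pinv, np.linalg.pinv(A))
```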