Recover the orthogonal matrix U in SVD


I'm trying to compute the SVD of a non-square $m\times n$ matrix ($m>n$), and I'm following Vini's suggestions from this question: SVD for Non-Square matrices?.

  1. Reduce the $m \times n$ matrix $A$ to triangular form by QR factorization: $A = QR$, where $R$ is an $n \times n$ upper triangular matrix.
  2. Reduce $R$ to a bidiagonal matrix $B$ using orthogonal transformations: $U^TRV = B$, where $U^TU = V^TV = I$.
  3. Compute the SVD of $B$ using any standard method, such as (a) the QR algorithm, (b) bisection, or (c) divide and conquer.

I was able to reduce the matrix to upper bidiagonal form and then decompose $B$ into $B = U_1SV_1^T,$ where $U_1,V_1 \in \mathbb R^{n\times n}$ are orthogonal matrices and $S \in \mathbb R^{n\times n}$ is a diagonal matrix with the singular values on its diagonal. But our goal was to decompose $A$ into $A = USV^T,$ where $U\in \mathbb R^{m\times m}$, $S \in \mathbb R^{m\times n}$, $V \in \mathbb R^{n\times n}.$ How do we recover the original orthogonal matrix $U$?

3 Answers

BEST ANSWER

Following your steps:

Step $1$: $A=QR$ where $Q \in \mathbb{R}^{m \times n}, R\in \mathbb{R}^{n \times n}$.

Step $2$: $U_1^TRV_1=B$, where $U_1 \in \mathbb{R}^{n \times n}, V_1 \in \mathbb{R}^{n \times n}, B \in \mathbb{R}^{n\times n}$.

Step $3$: $B=U_2S_2V_2^T, U_2\in \mathbb{R}^{n \times n}, S_2\in \mathbb{R}^{n \times n}, V_2 \in \mathbb{R}^{n \times n}$.

Combining them together, we have

$$A=QR=Q(U_1BV_1^T)=QU_1(U_2S_2V_2^T)V_1^T=(QU_1U_2)S_2(V_2^TV_1^T)$$

At this point, we have $QU_1U_2 \in \mathbb{R}^{m \times n}$, $S_2 \in \mathbb{R}^{n \times n}$, and $V := V_1V_2 \in \mathbb{R}^{n \times n}$.

Depending on your intention, this may already accomplish what you want: it is the thin (economy-size) SVD of $A$.

However, suppose you want to find $U \in \mathbb{R}^{m \times m}$ and $S \in \mathbb{R}^{m \times n}$.

We can let $$U = \begin{bmatrix} QU_1U_2 & Q_2 \end{bmatrix}\in \mathbb{R}^{m \times m}, S = \begin{bmatrix} S_2 \\ 0_{(m-n) \times n}\end{bmatrix} \in \mathbb{R}^{m \times n}$$

where the columns of $Q_2 \in \mathbb{R}^{m \times (m-n)}$ form an orthonormal basis of the nullspace of $(QU_1U_2)^T$.

That is $Q_2^TQ_2=I_{(m-n) \times (m-n)} $ and $(QU_1U_2)^TQ_2=0$.

Note that in MATLAB, an orthonormal basis for the nullspace can be found with the command null.
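The construction above can be sketched in Python with NumPy/SciPy (a minimal illustration, not MATLAB code). Here the reduced SVD from np.linalg.svd stands in for the thin factors $QU_1U_2$, $S_2$, $V$ of the answer, and scipy.linalg.null_space plays the role of MATLAB's null; the random test matrix is an assumption for demonstration only:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
m, n = 6, 3
A = rng.standard_normal((m, n))

# Thin factors: U_thin stands in for Q*U1*U2 (m x n),
# s for the diagonal of S2, Vt for V^T.
U_thin, s, Vt = np.linalg.svd(A, full_matrices=False)

# Q2: orthonormal basis of the nullspace of U_thin^T,
# i.e. the orthogonal complement of the columns of U_thin.
Q2 = null_space(U_thin.T)                          # m x (m-n)

U = np.hstack([U_thin, Q2])                        # m x m, orthogonal
S = np.vstack([np.diag(s), np.zeros((m - n, n))])  # m x n

assert np.allclose(U.T @ U, np.eye(m))  # U is orthogonal
assert np.allclose(U @ S @ Vt, A)       # full SVD reconstructs A
```

The padded columns $Q_2$ never touch the data: they multiply the zero block of $S$, which is why any orthonormal completion works.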

ANSWER

I can't speak to exactly how MATLAB does it, but the standard way of computing the SVD is to recognize that for any $m\times n$ matrix $A$, the matrices $AA^T$ and $A^TA$ are both square and symmetric positive semi-definite: $AA^T$ is $m\times m$ while $A^TA$ is $n\times n$. Because these matrices are symmetric, the spectral theorem allows us to find an orthogonal decomposition of each:

$$ AA^T \;\; =\;\; UDU^T \hspace{2pc} A^TA \;\; =\;\; VEV^T. $$

What we find is that the singular value decomposition is built from these matrices: $A = U\Sigma V^T$, where $U$ comes from the spectral decomposition of $AA^T$ and $V$ from that of $A^TA$. Since $D$ and $E$ have the same nonzero eigenvalues (call them $\lambda_i$; the larger matrix simply has $m-n$ extra zeros), we can construct $\Sigma$ by placing $\sqrt{\lambda_i}$ along the main diagonal of an $m\times n$ matrix.

In short, you find $U$ by diagonalizing $AA^T$.

ANSWER

The function svd in MATLAB very probably uses the DGESVD routine of LAPACK and it is (again, probably) the Intel MKL implementation.

What it basically does is the following:

  1. Compute the QR factorization of $A$: $A=QR$.
  2. Transform R to a bidiagonal form: $R=U_1BV_1^T$.
  3. Compute the SVD of $B$: $B=U_2SV_2^T$.

The implementation at netlib uses DBDSQR, which implements the zero-shift QR algorithm.

Then we have $$ A=QR=QU_1BV_1^T=QU_1U_2SV_2^TV_1^T=USV^T $$ with $U:=QU_1U_2$ and $V:=V_1V_2$.
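The three-step pipeline can be sketched with NumPy (an illustration, not LAPACK's actual code path). NumPy exposes no separate bidiagonalization step, so steps 2 and 3 are folded into one SVD of the small factor $R$, which still shows how the factors compose into $U := QU_1U_2$:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 8, 4
A = rng.standard_normal((m, n))

# Step 1: thin QR factorization; Q is m x n, R is n x n.
Q, R = np.linalg.qr(A)

# Steps 2-3 combined: SVD of the n x n factor R.
# (LAPACK would first bidiagonalize R, then run DBDSQR on B.)
Ur, s, Vt = np.linalg.svd(R)

# Compose: A = Q R = Q (Ur S Vt) = (Q Ur) S Vt, so U := Q Ur.
U = Q @ Ur

assert np.allclose(U @ np.diag(s) @ Vt, A)  # SVD of A recovered
assert np.allclose(U.T @ U, np.eye(n))      # thin U is orthonormal
```

Working on the $n \times n$ factor $R$ instead of the tall $A$ is exactly why the QR preprocessing pays off when $m \gg n$.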