Intuitive comparison of change of basis to SVD for any linear transformation


A linear transformation $T$ with transformation matrix $A$ can be represented, after a change of basis, by a matrix $D$ such that $$ A = C D C^{-1} $$ where $C$ is the change-of-basis matrix.

The matrix $D$ here may or may not be diagonal, depending on $A$: it is diagonal in the case of an eigenvector decomposition, when various conditions are met. I am not sure whether there is a restriction on $A$ for this to be performed, i.e. can it only be done when $A$ is a square matrix?
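As a quick sanity check of the $A = C D C^{-1}$ factorization, here is a numpy sketch using a hypothetical $2\times 2$ matrix with distinct eigenvalues (so it is guaranteed diagonalizable):

```python
import numpy as np

# Hypothetical example: distinct eigenvalues (2 and 5), so A is diagonalizable.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, C = np.linalg.eig(A)   # columns of C are eigenvectors of A
D = np.diag(eigvals)            # D is diagonal in the eigenvector basis

# Change of basis: A = C D C^{-1}
A_rebuilt = C @ D @ np.linalg.inv(C)
print(np.allclose(A, A_rebuilt))  # True
```

Note that `np.linalg.eig` itself requires a square input, which already hints at the restriction asked about above.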

On the other hand, SVD of A gives $$ A = U \Sigma V^{T} $$

What throws me off is $V^{T}$. Shouldn't $V^{T}$ be related to $U$ just as $C^{-1}$ is related to $C$ in the change-of-basis approach? Is the difference because $U$ may not be the same basis, but one of a different dimension, since $A$ need not be square? In case $A$ is square, is the SVD's $V^{T}$ intuitively the same as $C^{-1}$?
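One can probe this question numerically. In a numpy sketch (the matrices below are hypothetical examples), $U$ and $V$ differ even for a square $A$, but they coincide when $A$ is symmetric positive definite, where the SVD reduces to an orthogonal eigendecomposition:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

U, s, Vt = np.linalg.svd(A)
print(np.allclose(A, U @ np.diag(s) @ Vt))   # True: reconstruction holds
print(np.allclose(U, Vt.T))                  # False: U and V differ even for square A

# For a symmetric positive-definite matrix the two bases coincide,
# so the SVD is an eigendecomposition with an orthogonal basis.
S = A @ A.T                                  # symmetric positive definite
U2, s2, Vt2 = np.linalg.svd(S)
print(np.allclose(U2, Vt2.T))                # True
```

So squareness alone is not enough for $V^{T}$ to play the role of $U^{-1}$; symmetry (more precisely, $A$ being symmetric positive semidefinite) is what collapses the two bases into one.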

Any other comparison?

2 Answers

Best answer:

I looked further into the details of the change-of-basis process. The equation I wrote for the change of basis, given below, only holds when $T$ is a transformation from $V \to V$, i.e. it maps a vector space to itself. $$ A = C D C^{-1} $$

A more general representation of a change of basis is $$ A = E D C^{-1} $$ which describes a transformation from a vector space $V$ to a vector space $W$, so $A$ can be a non-square matrix. This is equivalent to the SVD in the sense that $E$ is a rotation, $D$ is a stretching matrix, and $C$ is another rotation. For an arbitrary choice of the bases $E$ and $C$, $D$ need not be diagonal. However, if the bases are orthonormal, as in the SVD, then $D$ becomes the diagonal matrix $\Sigma$ of singular values.
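This general $A = E D C^{-1}$ picture with orthonormal bases is exactly what `np.linalg.svd` computes: $E = U$, $C = V$, and $C^{-1} = V^{T}$ because $V$ is orthogonal. A sketch with a hypothetical non-square $A$:

```python
import numpy as np

# Hypothetical 3x2 matrix: a map from R^2 to R^3, so A is non-square.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])

U, s, Vt = np.linalg.svd(A)      # U is 3x3, Vt is 2x2
Sigma = np.zeros_like(A)         # 3x2 "stretching" matrix
Sigma[:2, :2] = np.diag(s)       # singular values on the diagonal

# E = U (rotation in the codomain), C = V (rotation in the domain),
# and C^{-1} = V^T since V is orthogonal.
A_rebuilt = U @ Sigma @ Vt
print(np.allclose(A, A_rebuilt))  # True
```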

Second answer:

Let's play with this a bit.

Let $ A \in \mathbb{R}^{m \times n} $. Then we know it has a singular value decomposition $ A = U \Sigma V^T $ where $ U \in \mathbb{R}^{m \times m} $ and $ V \in \mathbb{R}^{n \times n} $ are orthogonal ($ U^T U = U U^T = I $ and $ V^T V = V V^T = I $) and $ \Sigma $ has nonnegative diagonal elements, ordered from largest to smallest.
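The properties listed above (orthogonality of $U$ and $V$, nonnegative and sorted singular values) can be checked directly in numpy; the random matrix below is just a placeholder example:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # arbitrary non-square example, m=4, n=3

U, s, Vt = np.linalg.svd(A)
print(np.allclose(U.T @ U, np.eye(4)))    # U^T U = I
print(np.allclose(U @ U.T, np.eye(4)))    # U U^T = I
print(np.allclose(Vt @ Vt.T, np.eye(3)))  # V^T V = I
print(np.all(s >= 0))                     # singular values are nonnegative
print(np.all(np.diff(s) <= 0))            # and ordered largest to smallest
```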

Now consider $ y = A x $.

$ x = I x = V V^T x $. So, $ V^T x $ gives you the coefficients of $ x $ when using the columns of $ V $ as the basis.

$ y = I y = U U^T y $. So, $ U^T y $ gives you the coefficients of $ y $ when using the columns of $ U $ as the basis.

So, $ U U^T y = A V V^T x $. Equivalently, $ (U^T y) = (U^T A V) (V^T x) = \Sigma (V^T x) $.
So, if you view $ x $ in the right orthonormal basis (given by the columns of $ V $) and $ y $ in the right orthonormal basis (given by the columns of $ U $), then the (non-square) matrix that relates them becomes diagonal.
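The coefficient identity $(U^T y) = \Sigma (V^T x)$ can be verified numerically; the sizes and data below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))   # hypothetical 3x2 map
x = rng.standard_normal(2)
y = A @ x

U, s, Vt = np.linalg.svd(A)
Sigma = np.zeros((3, 2))
Sigma[:2, :2] = np.diag(s)

x_coef = Vt @ x      # coefficients of x in the basis given by columns of V
y_coef = U.T @ y     # coefficients of y in the basis given by columns of U

# In these two bases, the map x -> y is just the diagonal matrix Sigma.
print(np.allclose(y_coef, Sigma @ x_coef))  # True
```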

A truly beautiful result.