Why is the transformation not unique if eigenvalues are repeated or zero


I am using the formula here to compute the transformation between two coordinate systems in my 3D game (two sets of the same number of points, with known correspondences). http://www.ltu.se/cms_fs/1.51590!/svd-fitting.pdf
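In code, my reading of the paper's recipe is roughly the numpy sketch below (the function and variable names are mine, not the paper's):

```python
import numpy as np

def fit_rigid_transform(src_pts, dst_pts):
    """Least-squares R, t with R @ src + t ~= dst.

    src_pts, dst_pts: (N, 3) arrays of corresponding points.
    """
    src_c = src_pts.mean(axis=0)            # centroids
    dst_c = dst_pts.mean(axis=0)
    A = (src_pts - src_c).T                 # 3 x N, centered source
    B = (dst_pts - dst_c).T                 # 3 x N, centered destination
    C = B @ A.T                             # 3 x 3 correlation matrix
    U, sigma, Vt = np.linalg.svd(C)
    # Force a proper rotation (det = +1) rather than a reflection.
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = dst_c - R @ src_c
    return R, t
```

The `np.diag([1, 1, d])` step is exactly where the $\det(UV^T)=-1$ condition I quote below enters.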

I can't understand these lines on page 2:

The solution to problem (2) is unique if $\sigma_2(C)$ is nonzero. (More strictly, the solution is not unique if $\sigma_2(C)=0$, or if $\sigma_2(C) = \sigma_3(C)$ and $\det(UV^T)=-1$.)

How do these conditions work? How would you explain this to a layman? Can anyone point me in the right direction to understand this?

Is it because I don't have enough equations to solve for as many unknowns if my eigenvalues are zero or repeated?

Don't I need 5 equations to solve for translation and rotation around 3 axes?

I think the orthogonal Procrustes problem solves only for rotation, so a minimum of 2 equations? And if this is correct, why is $\det=-1$ important?


There are 2 answers below.


You can think of a 3D linear transform as one that maps a sphere to an ellipsoid. The lengths of the ellipsoid's axes are given by the eigenvalues (think of the diagonalization; for a general matrix, by the singular values) and the axis directions by the eigenvectors.

If two eigenvalues are equal, the ellipsoid is an ellipsoid of revolution and can rotate freely around the remaining axis.

If an eigenvalue is zero, the ellipsoid is flat and is missing an axis.
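Here is a small numpy check of this picture (my own sketch, building the paper's correlation matrix $C = BA^T$ from centered points): collinear points collapse the ellipsoid to a segment, so $\sigma_2(C)=\sigma_3(C)=0$, and every rotation about the line fits perfectly, so the best rotation cannot be unique.

```python
import numpy as np

# Collinear source points on the x-axis (already centered), as columns.
A = np.array([[-1.0, 0.0, 0.0],
              [ 0.0, 0.0, 0.0],
              [ 1.0, 0.0, 0.0]]).T         # 3 x N
B = A.copy()                               # identical target points

C = B @ A.T
print(np.linalg.svd(C, compute_uv=False))  # [2. 0. 0.] -> sigma_2 = 0

# Any rotation about the x-axis maps the points to themselves,
# so infinitely many R achieve zero error.
for angle in (0.3, 1.2):
    c, s = np.cos(angle), np.sin(angle)
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0,   c,  -s],
                   [0.0,   s,   c]])
    print(np.abs(Rx @ A - B).max())        # 0.0 for every angle
```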


The linear transformation of a vector $v$ is given by a matrix-vector multiplication $Tv$, where the $n\times n$ matrix $T$ is the linear map. This can be expressed as $$\begin{bmatrix} |&|&\cdots&| \\ T_{1\downarrow} &T_{2\downarrow}&\cdots& T_{n\downarrow} \\ |&|&\cdots&| \\ \end{bmatrix} \begin{bmatrix} v_1\\v_2\\\vdots\\v_n \end{bmatrix} =v_1 T_{1\downarrow}+v_2 T_{2\downarrow}+\cdots+v_n T_{n\downarrow} $$
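A quick numeric check of this column picture (a small numpy sketch with an example matrix of my own):

```python
import numpy as np

T = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])
v = np.array([1.0, 2.0, 3.0])

# T @ v equals the v_i-weighted sum of the columns of T.
combo = sum(v[i] * T[:, i] for i in range(3))
print(np.allclose(T @ v, combo))  # True
```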

A matrix $A$ is singular if and only if $0$ is an eigenvalue. Therefore, if you have a (at least one) zero eigenvalue, then your linear map is of lower rank; that is, it is equivalent to a matrix with a (at least one) zero column, so $T:\mathbb{R}^n\to \mathbb{R}^m$ where $m<n$, and therefore it has no unique representation in $\mathbb{R}^n$: a degree of freedom.
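For instance (a hypothetical numpy example), a matrix with a zero eigenvalue flattens $\mathbb{R}^3$ onto a plane, so distinct inputs become indistinguishable:

```python
import numpy as np

# Projection onto the xy-plane: eigenvalues 1, 1, 0.
T = np.diag([1.0, 1.0, 0.0])
print(np.linalg.eigvals(T))            # [1. 1. 0.] -> singular
print(np.linalg.matrix_rank(T))        # 2, not 3

# Two different vectors map to the same image: the z-component
# is a lost degree of freedom, so T has no unique inverse.
print(T @ np.array([1.0, 2.0, 3.0]))   # [1. 2. 0.]
print(T @ np.array([1.0, 2.0, -7.0]))  # [1. 2. 0.]
```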

Now suppose a matrix has a repeated eigenvalue, say $\lambda_1=\dots=\lambda_k$, $k<n$; one says it has algebraic multiplicity $k$. The sum of the algebraic multiplicities must equal $n$, but the number of distinct eigenvalues need not. An eigenvalue of algebraic multiplicity $k$ may have at most $k$ linearly independent eigenvectors; the number it actually has is called its geometric multiplicity.

When the geometric multiplicity of every eigenvalue equals its algebraic multiplicity, there is no problem. When the geometric multiplicity is less than the algebraic multiplicity, the matrix/transformation is not diagonalizable; that is, the basis of eigenvectors is missing one or more vectors. These missing vectors can be added (they are called generalized eigenvectors), but there is no unique way to do it, hence a degree of freedom and, of course, non-uniqueness.
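A concrete non-diagonalizable example (my own small numpy illustration) is the $2\times 2$ Jordan block:

```python
import numpy as np

# Jordan block: eigenvalue 1 with algebraic multiplicity 2 ...
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])
vals, vecs = np.linalg.eig(J)
print(vals)                        # [1. 1.]: a repeated eigenvalue
# ... but only one independent eigenvector direction (geometric
# multiplicity 1): both columns are numerically parallel to (1, 0).
print(vecs)
print(np.linalg.matrix_rank(vecs)) # 1: not a full eigenbasis
```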

Note that due to the orthogonality/unitarity of $U$ and $V$ we have $UU^T=I$, $VV^T=I$ and $(UV^T)(VU^T)=I$, which implies $\det(U)=\det(U^T)=\pm1$, $\det(V)=\det(V^T)=\pm1$ and $\det(UV^T)=\pm1$, regardless of the algebraic multiplicities of the eigenvalues. So if $\det(UV^T)=-1$ we have $\det(U)=-\det(V)=\pm1$.

Now the question is why they have different signs when there is an eigenvalue whose geometric multiplicity is lower than its algebraic multiplicity. The answer is hidden in $\det(U \Sigma V^T)$; see here.
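To see the sign issue numerically, here is a small sketch (mine, not from the linked note) using the common fix of flipping the axis belonging to the smallest singular value:

```python
import numpy as np

# A correlation matrix C whose naive R = U @ Vt comes out as a
# reflection (det = -1), e.g. from a reflected point configuration.
C = np.diag([3.0, 2.0, -1.0])
U, sigma, Vt = np.linalg.svd(C)
print(np.linalg.det(U @ Vt))      # -1.0: a reflection, not a rotation

# Flip the axis of the smallest singular value to recover det = +1.
d = np.sign(np.linalg.det(U @ Vt))
R = U @ np.diag([1.0, 1.0, d]) @ Vt
print(np.linalg.det(R))           # +1.0: a proper rotation
```

When $\sigma_2(C)=\sigma_3(C)$, the flipped axis is not distinguished from its twin, so this corrected rotation is not unique either, which is exactly the second condition in the passage quoted in the question.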