There are many applications in applied mathematics where the SVD of a matrix comes in handy. For example, consider the problem of finding an approximate solution to an (overdetermined) homogeneous linear system $A\vec{x}=\vec{0}$ subject to $\|\vec{x}\|=1$. It can be shown that, with the singular values of $A = UDV^t$ sorted in decreasing order, the last column of $V$ (the right singular vector belonging to the smallest singular value) minimizes $\|A\vec{x}\|$ subject to $\|\vec{x}\| = 1$.
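To make the setup concrete, here is a small NumPy sketch of that recipe (the random matrix `A` is just a stand-in for any tall system; in MATLAB the same thing would be `[U,S,V] = svd(A); x = V(:,end);`):

```python
import numpy as np

# Stand-in overdetermined system: any tall matrix A works here.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))

# numpy.linalg.svd returns singular values in decreasing order,
# so the last row of Vt (= last column of V) pairs with the
# smallest singular value and minimizes ||A x|| over unit vectors.
U, s, Vt = np.linalg.svd(A)
x = Vt[-1, :]

print(np.isclose(np.linalg.norm(x), 1.0))        # True: unit norm by construction
print(np.isclose(np.linalg.norm(A @ x), s[-1]))  # True: residual = smallest singular value
```

Note that `-x` achieves exactly the same residual, which is precisely the sign ambiguity asked about below.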
The problem is that the SVD of a matrix is not unique. $U$ and $V$ are orthogonal matrices, but we still have freedom in choosing signs: negating a column of $U$ together with the corresponding column of $V$ leaves $UDV^t$ unchanged. How can I resolve this ambiguity? Is there a trick, or a natural convention for choosing the signs, that guarantees I always get the same answer?
In particular, what can I do in MATLAB to avoid this ambiguity?
Edit:
A specific example where non-uniqueness can become an issue:
Suppose that you want to measure the intrinsic parameters of a pinhole camera. In practical cases the calibration matrix $K$ is an upper triangular matrix. A little algebraic manipulation gives us a symmetric matrix $B=K^{-t}K^{-1}$ which has a nice interpretation and is sufficient for recovering $K$. To find $K$, you take photos of a checkerboard pattern and generally arrive at a linear system of the form $A_{n \times 6}\, \vec{b}_{6 \times 1}=\vec{0}_{n \times 1}$ with the additional constraint $\|\vec{b}\|=1$ (to rule out the trivial solution). Here are some numerical results I have obtained for $B$ after running the algorithm on synthesized checkerboard photos:
$$\begin{bmatrix} 0.000000082076150 & -0.000000000008003 & -0.000037912894906 \\ -0.000000000008003 & 0.000000071823158 & -0.000055278469079 \\ -0.000037912894906 & -0.000055278469079 & 0.999999997753446 \end{bmatrix}$$

$$\begin{bmatrix} -0.000000104164118 & -0.000000000027239 & 0.000053428189068 \\ -0.000000000027239 & -0.000000091142604 & 0.000067709246949 \\ 0.000053428189068 & 0.000067709246949 & -0.999999996280434 \end{bmatrix}$$

$$\begin{bmatrix} 0.000000109630415 & 0.000000000051933 & -0.000056355404234 \\ 0.000000000051933 & 0.000000095949976 & -0.000071441815015 \\ -0.000056355404234 & -0.000071441815015 & 0.999999995860057 \end{bmatrix}$$

Since $K$ is intrinsic and depends only on the structure of the camera itself, the values we find for $p_x$, $p_y$, $f_x$ and $f_y$ must remain close to each other. Unfortunately, with the matrices above, $p_x$ and $p_y$ sometimes come out as the negatives of their actual values. I blame the sign ambiguity of the SVD for this.
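One observation that may help here: since $B=K^{-t}K^{-1}$ is positive definite whenever $K$ is invertible, its diagonal entries must be positive, so $\vec{b}$'s sign can be fixed by requiring $B_{33}>0$. A small NumPy sketch of that check, using the second matrix above (which came out with the flipped sign); `fix_B_sign` is my own name for the helper:

```python
import numpy as np

def fix_B_sign(B):
    """B = K^{-t} K^{-1} is positive definite for invertible K, so its
    diagonal must be positive. The homogeneous system determines b only
    up to sign, so flip B when B[2, 2] < 0."""
    return -B if B[2, 2] < 0 else B

# Second estimate from the question, which came out sign-flipped:
B2 = np.array([
    [-0.000000104164118, -0.000000000027239,  0.000053428189068],
    [-0.000000000027239, -0.000000091142604,  0.000067709246949],
    [ 0.000053428189068,  0.000067709246949, -0.999999996280434],
])

B2_fixed = fix_B_sign(B2)
print(B2_fixed[2, 2] > 0)  # True: now consistent with the other runs
```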