I was playing with simulations of Euler's equations of rotation in this question. This involves integrating an ordinary differential equation for a rotation matrix, $R$, computed for all nine of its elements (3 by 3). Due to numerical errors, $R$ will probably not remain orthogonal. So I wish to explore ways of suppressing these disturbances as much as possible by normalizing $R$ to the "closest" orthogonal matrix.
Two simple ways would be to take the three columns or rows of $R$ as the starting vectors $\vec{v}_1$, $\vec{v}_2$ and $\vec{v}_3$ for an orthonormal basis. Either use Gram-Schmidt orthonormalization
$$ \vec{u}_1 = \frac{\vec{v}_1}{\|\vec{v}_1\|}, $$
$$ \vec{u}_2 = \frac{\vec{v}_2 - \left(\vec{v}_2 \cdot \vec{u}_1\right) \vec{u}_1}{\|\vec{v}_2-\left(\vec{v}_2\cdot\vec{u}_1\right)\vec{u}_1\|}, $$
$$ \vec{u}_3 = \frac{\vec{v}_3 - \left(\vec{v}_3 \cdot \vec{u}_1\right) \vec{u}_1 - \left(\vec{v}_3 \cdot \vec{u}_2\right) \vec{u}_2}{\|\vec{v}_3 - \left(\vec{v}_3 \cdot \vec{u}_1\right) \vec{u}_1 - \left(\vec{v}_3 \cdot \vec{u}_2\right) \vec{u}_2\|}; $$
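Concretely, the Gram-Schmidt step above can be sketched in NumPy (the function name is mine; I take the columns of $R$ as the $\vec{v}_i$):

```python
import numpy as np

def gram_schmidt_normalize(R):
    """Orthonormalize the columns of R in order, as in the formulas above."""
    v1, v2, v3 = R[:, 0], R[:, 1], R[:, 2]
    u1 = v1 / np.linalg.norm(v1)
    w2 = v2 - (v2 @ u1) * u1                    # remove the u1 component
    u2 = w2 / np.linalg.norm(w2)
    w3 = v3 - (v3 @ u1) * u1 - (v3 @ u2) * u2   # remove u1 and u2 components
    u3 = w3 / np.linalg.norm(w3)
    return np.column_stack([u1, u2, u3])
```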
or use cross products (assuming that $R$ contains no reflection and that the basis vectors are right-handed)
$$ \vec{u}_1 = \frac{\vec{v}_2 \times \vec{v}_3}{\|\vec{v}_2 \times \vec{v}_3\|}, $$
$$ \vec{u}_2 = \frac{\vec{v}_3 \times \vec{u}_1}{\|\vec{v}_3 \times \vec{u}_1\|}, $$
$$ \vec{u}_3 = \vec{u}_1 \times \vec{u}_2, $$
and construct the normalized rotation matrix, $\hat{R}$, from $\vec{u}_1$, $\vec{u}_2$ and $\vec{u}_3$.
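The cross-product variant looks like this in NumPy (again just a sketch, with the same no-reflection assumption as above):

```python
import numpy as np

def cross_normalize(R):
    """Orthonormalize via cross products (assumes a right-handed, reflection-free R)."""
    v1, v2, v3 = R[:, 0], R[:, 1], R[:, 2]
    u1 = np.cross(v2, v3)
    u1 /= np.linalg.norm(u1)
    u2 = np.cross(v3, u1)
    u2 /= np.linalg.norm(u2)
    u3 = np.cross(u1, u2)   # already a unit vector, since u1 and u2 are orthonormal
    return np.column_stack([u1, u2, u3])
```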
However, these methods have the disadvantage that the construction of $\hat{R}$ is biased by the order of operations. For example, cyclically rotating the starting vectors to $\vec{v}_2$, $\vec{v}_3$ and $\vec{v}_1$ and constructing $\hat{R}$ from $\vec{u}_3$, $\vec{u}_1$ and $\vec{u}_2$ most likely yields a different matrix. The cross product method is slightly less asymmetrical than the Gram-Schmidt method, but it is still biased.
In an answer to the linked question, a method is mentioned that uses the matrix logarithm and exponential to find the nearest orthogonal matrix
$$ \hat{R} = \exp\left(\frac12\left(\log R - (\log R)^T\right)\right). $$
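With SciPy this can be written almost verbatim (a sketch; note that `logm` becomes ill-conditioned for rotations near $180°$ and may warn about accuracy there):

```python
import numpy as np
from scipy.linalg import expm, logm

def log_exp_normalize(R):
    """Exponentiate the skew-symmetric part of log(R), as in the formula above."""
    L = logm(R)                    # principal matrix logarithm
    return expm(0.5 * (L - L.T))   # exp of a skew-symmetric matrix is orthogonal
```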
However, I also came across the polar decomposition, in which $R$ is written as the product of an orthogonal matrix $U$ and a symmetric positive-semidefinite scaling $P$, i.e. $R = UP$. The normalized rotation matrix $\hat{R}$ can then be defined as $U$, which can be written as
$$ \hat{R} = R \left(\sqrt{R^T R}\right)^{-1}. $$
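A direct (if not the most efficient) translation of this formula using `scipy.linalg.sqrtm`; SciPy's `scipy.linalg.polar` computes the same orthogonal factor directly:

```python
import numpy as np
from scipy.linalg import sqrtm

def polar_normalize(R):
    """Orthogonal polar factor: R (R^T R)^(-1/2)."""
    return R @ np.linalg.inv(sqrtm(R.T @ R))
```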
Or one can use the singular value decomposition, which seems cheaper to compute numerically, where
$$ R = U D V^T, $$
$$ \hat{R} = U V^T. $$
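In NumPy the SVD version is essentially a two-liner (sketch):

```python
import numpy as np

def svd_normalize(R):
    """Nearest orthogonal matrix in the Frobenius norm: U V^T from R = U D V^T."""
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt
```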
Which of these methods (logarithm/exponential versus polar/SVD) gives a better normalized matrix? I did a few tests with random matrices and got different results from the two. I believe that the logarithm/exponential method requires two decompositions, because two matrices are added inside the exponential, after which I don't think the decomposition computed for the logarithm can be reused for the exponential. The polar or SVD method requires only one decomposition, so it could probably be computed in less time.
I also tried converting $R$ to a quaternion, normalizing it and converting back to a rotation matrix. But the results seemed to be even worse than with the Gram-Schmidt method.
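What I tried looks roughly like the following sketch (only the $w$-dominant branch of the matrix-to-quaternion conversion, so it assumes $1 + \operatorname{tr} R > 0$; a robust implementation needs the other branches too):

```python
import numpy as np

def quat_normalize(R):
    """Quaternion round-trip normalization (w-dominant branch only)."""
    # Extract a (non-unit) quaternion from the disturbed matrix.
    w = 0.5 * np.sqrt(1.0 + R[0, 0] + R[1, 1] + R[2, 2])
    x = (R[2, 1] - R[1, 2]) / (4.0 * w)
    y = (R[0, 2] - R[2, 0]) / (4.0 * w)
    z = (R[1, 0] - R[0, 1]) / (4.0 * w)
    # Normalize the quaternion.
    q = np.array([w, x, y, z])
    w, x, y, z = q / np.linalg.norm(q)
    # Rebuild a rotation matrix from the unit quaternion.
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```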
One other thing I tested is the following iterative method
$$ \hat{R}_{n+1} = \frac32 \hat{R}_n - \frac12 \hat{R}_n \hat{R}_n^T \hat{R}_n, $$
with $\hat{R}_0=R$. This is based on $\hat{R}_{n+1} = \frac12 \left(\hat{R}_n + \left(\hat{R}_n^T\right)^{-1}\right)$, using one step of Newton's method to approximate the inverse with $\hat{R}_n$ as the initial guess.
One iteration, computed in Matlab, is faster than the polar/SVD method; after two iterations the computation times were similar, but the iterative method would probably still have a residual error. However, a single iteration might still be a viable way to suppress disturbances between steps of the simulation of Euler's equations.
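The iteration itself is cheap to write down (a NumPy sketch of what I timed in Matlab; this Newton-Schulz-type iteration converges when $\hat{R}_0$ is close enough to orthogonal, i.e. all singular values lie in $(0, \sqrt{3})$):

```python
import numpy as np

def iterative_normalize(R, n=1):
    """n steps of R <- (3/2) R - (1/2) R R^T R."""
    for _ in range(n):
        R = 1.5 * R - 0.5 * (R @ R.T @ R)
    return R
```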
PS: I also looked at simulating directly with quaternions, but here I was just curious about normalizing disturbed rotation matrices.