Consider the norm $\| A \| = \sqrt{\mathrm{tr}(AA^t)}$. It is easy to see that $\mathrm{SO}(3)$ is a compact subspace of the $3 \times 3$ matrices in the topology induced by this norm: $\mathrm{O}(3)$ is compact, and $\mathrm{SO}(3)$, being the inverse image of $\{1\}$ under the continuous map $\det$, is a closed subset of $\mathrm{O}(3)$. So it makes sense to speak of the nearest and the farthest rotation matrix from a given matrix. The former, the nearest one, has been discussed online, and I could find a lot of information about it by Googling. The farthest rotation matrix, however, was not discussed. Out of curiosity: is it possible to find the farthest rotation matrix from a given matrix?
I tried to solve the problem using Lagrange multipliers, but I didn't know how to proceed because I'm not good at matrix calculus.
Finding the farthest matrix is actually not that different from finding the nearest matrix. The same techniques are used. It's just the conclusion that is different.
In general, suppose $A\in M_n(\mathbb R)$ and we want to maximise or minimise the Frobenius norm $\|A-R\|_F$ subject to $R\in SO(n,\mathbb R)$. Let $A=USV^T$ be a singular value decomposition and let $Q=U^TRV$. Since the Frobenius norm is invariant under orthogonal transformations, the value of the objective function is then equal to $\|S-Q\|_F$. Expanding the square gives $\|S-Q\|_F^2=\|S\|_F^2+n-2\operatorname{tr}(SQ)$, so the optimisation of $\|A-R\|_F$ is equivalent to the optimisation of $\operatorname{tr}(SQ)$ (maximising the norm means minimising the trace, and vice versa).
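As a quick numerical sanity check of this reduction (a sketch in NumPy; the matrix size and random seed are arbitrary choices, not part of the argument), we can verify that $\|A-R\|_F=\|S-Q\|_F$ and that the squared norm differs from $-2\operatorname{tr}(SQ)$ only by the constant $\|S\|_F^2+n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

# Build a random special orthogonal R from a QR decomposition.
R, _ = np.linalg.qr(rng.standard_normal((n, n)))
if np.linalg.det(R) < 0:
    R[:, 0] = -R[:, 0]   # flip one column so that det(R) = +1

U, s, Vt = np.linalg.svd(A)  # A = U S V^T, with Vt = V^T
S = np.diag(s)
Q = U.T @ R @ Vt.T           # Q = U^T R V

# Frobenius norm is invariant under the orthogonal change of basis:
lhs = np.linalg.norm(A - R, 'fro')
rhs = np.linalg.norm(S - Q, 'fro')
assert abs(lhs - rhs) < 1e-12

# ||S - Q||_F^2 = ||S||_F^2 + n - 2 tr(SQ), so optimising the norm
# is equivalent to optimising tr(SQ):
assert abs(lhs**2 - (np.sum(s**2) + n - 2 * np.trace(S @ Q))) < 1e-9
```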
Suppose $Q$ is a global optimiser of $\operatorname{tr}(SQ)$. The usual calculus argument shows that $SQ$ must be symmetric: for every skew-symmetric $K$, the curve $t\mapsto Qe^{tK}$ stays in $SO(n,\mathbb R)$, so differentiating $\operatorname{tr}(SQe^{tK})$ at $t=0$ gives $\operatorname{tr}(SQK)=0$ for all skew-symmetric $K$, which forces $SQ=(SQ)^T=Q^TS$. Hence $S^2=(SQ)(Q^TS)=(Q^TS)(SQ)=(Q^TSQ)^2$ and, by the uniqueness of the positive semidefinite square root, $S=Q^TSQ$. Thus $S$ commutes with $Q$, and the eigenspace corresponding to each eigenvalue of $S$ is an invariant subspace of $Q$.
For each nonzero eigenvalue of $S$, since the restriction of $S$ to the corresponding eigenspace is just a scaling operator, the condition that $SQ$ is symmetric means that the restriction of $Q$ to that eigenspace is symmetric too. If $S$ has a zero eigenvalue, since the restriction of $Q$ to the null space of $S$ does not affect the value of $\operatorname{tr}(SQ)$, we may also assume that the restriction of $Q$ to that null space is symmetric.
In other words, there exists a global optimiser of $\operatorname{tr}(SQ)$ such that $Q$ is symmetric. Therefore, by simultaneous orthogonal diagonalisation, we may assume that $Q$ is diagonal. As $Q$ is also real orthogonal, its diagonal entries must be $\pm1$.
The argument up to this point is the same whether we want to maximise or minimise $\|A-R\|_F$. With the observation that the optimal $Q$ can be taken to be a diagonal orthogonal matrix, it is now clear that the global maximum of $\|A-R\|_F$ subject to $R=UQV^T\in SO(n,\mathbb R)$ is given by
$$\begin{aligned} R&=-U\operatorname{diag}\left(1,\ldots,1,\det(-UV^T)\right)V^T,\\ \|A-R\|_F&=\sqrt{\sum_{i=1}^{n-1}(s_i+1)^2+\left(s_n+\det(-UV^T)\right)^2}, \end{aligned}$$
where $s_1\ge s_2\ge\cdots\ge s_n\ge0$ are the singular values of $A$. In contrast, the global minimum of $\|A-R\|_F$ subject to $R\in SO(n,\mathbb R)$ is given by
$$\begin{aligned} R&=U\operatorname{diag}\left(1,\ldots,1,\det(UV^T)\right)V^T,\\ \|A-R\|_F&=\sqrt{\sum_{i=1}^{n-1}(s_i-1)^2+\left(s_n-\det(UV^T)\right)^2}. \end{aligned}$$
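These closed forms are easy to test numerically. Below is a sketch in NumPy (the function name, the $3\times 3$ size, and the random-sampling check are my own illustrative choices): it builds both extremal rotations from the SVD and verifies that no randomly sampled element of $SO(3)$ is closer than the nearest one or farther than the farthest one.

```python
import numpy as np

def nearest_and_farthest_rotation(A):
    """Closed-form nearest and farthest matrices in SO(n) to A in the
    Frobenius norm, following the SVD formulas above."""
    n = A.shape[0]
    U, s, Vt = np.linalg.svd(A)
    d = np.linalg.det(U @ Vt)            # det(UV^T) = +-1
    # Nearest: Q = diag(1, ..., 1, det(UV^T))
    q_near = np.ones(n); q_near[-1] = d
    R_near = U @ np.diag(q_near) @ Vt
    # Farthest: Q = -diag(1, ..., 1, det(-UV^T)), det(-UV^T) = (-1)^n d
    q_far = -np.ones(n); q_far[-1] = -((-1) ** n) * d
    R_far = U @ np.diag(q_far) @ Vt
    return R_near, R_far

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
R_near, R_far = nearest_and_farthest_rotation(A)

# Both candidates really are rotations...
for R in (R_near, R_far):
    assert np.allclose(R.T @ R, np.eye(3))
    assert np.isclose(np.linalg.det(R), 1.0)

# ...and no randomly sampled rotation beats either of them.
d_near = np.linalg.norm(A - R_near, 'fro')
d_far = np.linalg.norm(A - R_far, 'fro')
for _ in range(2000):
    M, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(M) < 0:
        M[:, 0] = -M[:, 0]               # force det(M) = +1
    d = np.linalg.norm(A - M, 'fro')
    assert d_near - 1e-9 <= d <= d_far + 1e-9
```

Note how the $\det(UV^T)$ factor lands on the smallest singular value $s_n$ in both cases: that is where deviating from the unconstrained optimum is cheapest.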