$\newcommand{\ep}{\epsilon}$
Let $GL_n^+$ be the Lie group of invertible $n \times n$ matrices with positive determinant. In particular it's a connected open submanifold of the Euclidean space $\mathbb{R}^{n^2}$.
Now consider it with the induced metric from $\mathbb{R}^{n^2}$. (So it's a Riemannian submanifold of $\mathbb{R}^{n^2}$).
For clarity, this means endowing $GL_n^+ $ with the pullback metric of the Euclidean metric $e$ along the inclusion $ i:GL_n^+ \to (\mathbb{R}^{n^2},e) $. Explicitly: $g_z(X,Y) =tr(X^TY)$ , where $X,Y \in T_z(GL_n^+)$.
Questions:
(1) Is there an explicit formula for the Riemannian distance between two matrices $A,B \in GL_n^+$?
Conjecture: The Riemannian distance equals the Euclidean one.
Attempted proof:
Since $GL_n^+$ is open in $\mathbb{R}^{n^2}$, it follows that it's a totally geodesic submanifold. (That is, all its geodesics are geodesics in $(\mathbb{R}^{n^2},e)$, i.e. they are the usual straight lines in Euclidean space).
$GL_n^+$ is open $\Rightarrow$ for any $A \in GL_n^+$ there is a Euclidean ball centered around it which is contained in $GL_n^+$. Hence, for all matrices close enough to $A$, their distance from $A$ is just the Euclidean one (since the straight line between them lies in our submanifold).
Now consider $A,B \in GL_n^+$. Let $\alpha:[0,1] \to GL_n^+$ be the straight line path between them. Then:
$$ \det(\alpha(t))=\det(A+t(B-A)) $$ is a polynomial in $t$ of degree $\le n$ . Hence, it has only finitely many zeroes. This implies there are no more than $n$ points $t_i$ where $\alpha(t_i)$ is not invertible.
Hence, we only need to show that we can make arbitrarily small perturbations around each such 'bad' non-invertible matrix. This would imply the Riemannian distance equals the Euclidean one.
It would be nice if someone could find a neat argument to show this maneuver is indeed possible.
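For the record, here is a small numerical sketch of the finiteness claim above (Python/NumPy; the helper name `bad_times` is mine): since $\det(A+t(B-A))$ is a polynomial of degree $\le n$ in $t$, it is determined by its values at $n+1$ sample points, and its roots in $[0,1]$ are the only places where the straight line leaves $GL_n$.

```python
import numpy as np

def bad_times(A, B):
    """Times t in [0, 1] where the segment A + t(B - A) is not invertible.
    det(A + t(B - A)) is a polynomial of degree <= n in t, so it is
    determined by its values at n + 1 sample points."""
    n = A.shape[0]
    ts = np.linspace(0.0, 1.0, n + 1)
    vals = [np.linalg.det(A + t * (B - A)) for t in ts]
    coeffs = np.polyfit(ts, vals, n)           # exact fit up to round-off
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-6].real
    return sorted(r for r in real if 0.0 <= r <= 1.0)

# the segment from Id to -Id in GL_2^+ passes through 0 at t = 1/2
print(bad_times(np.eye(2), -np.eye(2)))
```

For $A = Id$, $B = -Id$ the determinant is $(1-2t)^2$, a double root at $t=1/2$, so the segment touches the non-invertible locus exactly once.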
Update: This conjecture is false. The key point (as noted by Jason DeVito and loup blanc) is that the determinant is negative along a non-negligible part of the straight-line path $\alpha$. By a continuity argument, any path which approximates the straight-line path closely enough must enter a region of negative determinant.
It turns out that the behaviour depends on the number of sign changes of the determinant.
Example for a case where the distance is Euclidean (a "jump" is possible): take $n=2$, $A = Id$, $B = -Id$. Start with a path from $Id$ to $\begin{bmatrix}\ep & 0 \\ 0 & \ep\end{bmatrix}$. Then go via
(1) $t \to \begin{bmatrix}\ep & -t \\ t & \ep\end{bmatrix}$ to $\begin{bmatrix}\ep & -\ep \\ \ep & \ep\end{bmatrix}$ ($t$ goes $0 \to \ep)$ .
(2) $t \to \begin{bmatrix}t & -\ep \\ \ep & t\end{bmatrix}$ to $\begin{bmatrix}-\ep & -\ep \\ \ep & -\ep\end{bmatrix}$ ($t$ goes $\ep \to -\ep)$.
(3) $t \to \begin{bmatrix}-\ep & -t \\ t & -\ep\end{bmatrix}$ to $\begin{bmatrix}-\ep & 0 \\ 0 & -\ep\end{bmatrix}$ ($t$ goes $\ep \to 0)$ .
Now continue with straight line until reaching $-Id$.
How much did this maneuver cost us?
The derivatives of the 3 broken straight paths we took were $\begin{bmatrix}0 & -1 \\ 1 & 0\end{bmatrix}, Id, \begin{bmatrix}0 & -1 \\ 1 & 0\end{bmatrix}$; their norms are all $\sqrt 2$. Hence, the total length is $\sqrt2 \cdot 4\ep$, which is arbitrarily small, as required.
(Also, note that the determinant was always $t^2 + \ep^2 > 0$ so we stayed in $GL_n^+$).
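A quick numerical sanity check of this cost computation (Python/NumPy, not part of the argument): discretize the three straight pieces, sum the chord lengths, and verify the determinant stays positive.

```python
import numpy as np

eps = 1e-3

# the three broken straight pieces of the detour around 0
piece1 = lambda t: np.array([[eps, -t], [t, eps]])     # t goes 0 -> eps
piece2 = lambda t: np.array([[t, -eps], [eps, t]])     # t goes eps -> -eps
piece3 = lambda t: np.array([[-eps, -t], [t, -eps]])   # t goes eps -> 0

def path_length(gamma, ts):
    """Frobenius-norm length of a discretized path t -> gamma(t)."""
    pts = [gamma(t) for t in ts]
    return sum(np.linalg.norm(q - p) for p, q in zip(pts, pts[1:]))

L = (path_length(piece1, np.linspace(0, eps, 100))
     + path_length(piece2, np.linspace(eps, -eps, 100))
     + path_length(piece3, np.linspace(eps, 0, 100)))
print(L / eps)                                   # ~ 4 * sqrt(2) ~= 5.657

# the determinant t^2 + eps^2 stays positive on every piece
assert all(np.linalg.det(p(t)) > 0
           for p in (piece1, piece2, piece3)
           for t in np.linspace(-eps, eps, 50))
```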
(2) Can we compute explicitly, for a given $A \in GL_n^+$, its distance from $SO(n)$?
$dist(A,SO(n)) =\underset{X \in SO(n)}{\text{min}} d(A,X)$
($SO(n)$ is the special orthogonal group and the minimum exists since $SO(n)$ is compact and $d$ is continuous)
And who is the minimizer (the closest matrix to $A$ in $SO(n)$)? Is it unique?
Note: Right or left multiplication by an element of $SO(n)$ is an isometry of $GL_n^+$ with the induced metric. Thus, $d$ is left- (right-) $SO(n)$-invariant.
In particular, if $A = U\Sigma V^T$ is the SVD decomposition of $A$, then $dist(A,SO(n)) = dist(\Sigma,SO(n))$, where $\Sigma$ is a square diagonal matrix whose diagonal elements are the (strictly positive) singular values of $A$.
So the question of computing the distance from $SO(n)$ (and the minimizer) is reduced to matrices of this type (i.e. diagonal with positive entries).
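For concreteness, here is a small NumPy sketch of this reduction (the helper `positive_svd` is my naming): when $\det A > 0$ one can always arrange $U, V \in SO(n)$ in the SVD, since flipping the last columns of both $U$ and $V$ leaves $\Sigma$ unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def positive_svd(A):
    """SVD A = U Sigma V^T with U, V in SO(n), assuming det(A) > 0.
    If det(U) = det(V) = -1, flip the last column of U and the last
    row of V^T; this leaves Sigma unchanged."""
    U, s, Vt = np.linalg.svd(A)
    if np.linalg.det(U) < 0:        # then det(Vt) < 0 too, as det(A) > 0
        U[:, -1] *= -1
        Vt[-1, :] *= -1
    return U, np.diag(s), Vt.T

A = rng.normal(size=(3, 3))
if np.linalg.det(A) < 0:
    A[0] *= -1                      # force det(A) > 0
U, Sigma, V = positive_svd(A)

# A = U Sigma V^T with rotations U, V; by left/right SO(n)-invariance,
# dist(A, SO(n)) = dist(Sigma, SO(n))
assert np.allclose(A, U @ Sigma @ V.T)
assert np.linalg.det(U) > 0 and np.linalg.det(V) > 0
```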
Travis gives a good reference dated 2014; there is another dated 2011: cf. http://arxiv.org/abs/1109.0520
If $A,B\in GL_n^+(\mathbb{R})$, there is no closed form for $d(A,B)$. Note that one can study $GL_n(\mathbb{C})$ in the same way.
We consider the scalar product on $GL_n$: $<M,N>=tr(M^TN)$. It is left and right $O(n)$-invariant. We deduce a LEFT-invariant Riemannian metric, defined at $Z\in GL_n$ by $g_Z(X,Y)=tr((Z^{-1}X)^TZ^{-1}Y)$.
Consider a geodesic curve $X:[0,1]\rightarrow GL_n^+$; it is a solution of the second order ODE (1): $X'=XU,U'=[U^T,U]$ where $U\in C^1([0,1],M_n)$.
The locally unique solution of $\{(1),X(0)=X_0,X'(0)=X_0V_0\}$ is $X(t)=X_0e^{tV_0^T}e^{t(V_0-V_0^T)}$, and it is defined on all of $\mathbb{R}$.
Note that the two exponentials combine into a single one iff $V_0$ is a normal matrix; in this case $X(t)=X_0e^{tV_0}$.
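One can check this closed form numerically (Python with NumPy/SciPy assumed; the expression $U(t)=e^{-tW}V_0e^{tW}$ with $W=V_0-V_0^T$ is what one gets by differentiating the claimed solution, and is my addition):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
X0 = expm(0.3 * rng.normal(size=(3, 3)))      # some point of GL_3^+
V0 = 0.5 * rng.normal(size=(3, 3))            # initial direction
W  = V0 - V0.T

def X(t):
    """The claimed geodesic X(t) = X0 e^{t V0^T} e^{t (V0 - V0^T)}."""
    return X0 @ expm(t * V0.T) @ expm(t * W)

def U(t):
    # differentiating X gives U = X^{-1} X' = e^{-tW} V0 e^{tW}
    return expm(-t * W) @ V0 @ expm(t * W)

h, t = 1e-6, 0.7
Xp = (X(t + h) - X(t - h)) / (2 * h)          # X'(t) by central difference
Up = (U(t + h) - U(t - h)) / (2 * h)          # U'(t)
Ut = U(t)
assert np.allclose(Xp, X(t) @ Ut, atol=1e-5)              # X' = X U
assert np.allclose(Up, Ut.T @ Ut - Ut @ Ut.T, atol=1e-5)  # U' = [U^T, U]
```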
Finally, the previous distance is right-invariant under rotations and left-invariant with respect to the action of $GL_n^+$.
I think that you can deduce the geodesic distance of $A\in GL_n^+$ to $SO_n$.
EDIT 1. Of course, the previous sentence is a joke. Yet, there is an easy instance: let $k>0$. Then $d(X_0,kX_0)=d(I_n,kI_n)=\sqrt{n}|\log(k)|$. Consequently $d(kI_n,SO(n))=\sqrt{n}|\log(k)|$.
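One can confirm this value numerically (NumPy sketch; the helper name is mine): integrate the speed of the curve $\gamma(t)=k^t I_n$ under the left-invariant metric. The speed is constantly $\sqrt{n}\,\log k$, so the length is $\sqrt{n}\,\log k$.

```python
import numpy as np

def left_invariant_length(gamma, dgamma, ts):
    """Length of t -> gamma(t) for g_Z(X, X) = tr((Z^{-1} X)^T Z^{-1} X),
    by the trapezoid rule on the grid ts."""
    speeds = []
    for t in ts:
        A = np.linalg.solve(gamma(t), dgamma(t))     # Z^{-1} X'
        speeds.append(np.sqrt(np.trace(A.T @ A)))
    speeds = np.array(speeds)
    return float(np.sum(0.5 * (speeds[:-1] + speeds[1:]) * np.diff(ts)))

n, k = 3, 5.0
gamma  = lambda t: k**t * np.eye(n)                  # path from I_n to k I_n
dgamma = lambda t: np.log(k) * k**t * np.eye(n)      # its derivative
length = left_invariant_length(gamma, dgamma, np.linspace(0.0, 1.0, 2000))
print(length)                                        # sqrt(3)*log(5) ~= 2.788
```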
EDIT 2. Answer to Asaf. OK, I understand. If we choose the metric $g_Z(X,Y)=tr((Z^{-1}X)^TZ^{-1}Y)$ (Riemannian manifold $\mathcal{R}$), then when $X(t)$ heads toward a non-invertible matrix $S$, lengths blow up (due to the factor $Z^{-1}$) and the time to reach $S$ is infinite. If the chosen metric is $g_Z(X,Y)=tr(X^TY)$ (Riemannian manifold $\mathcal{E}$), then this phenomenon does not occur.
$\mathcal{R}$ has the advantage of being geodesically complete and, consequently, complete as a metric space and although $\mathcal{R}$ is not compact, any two points $A$ and $B$ can be connected with a geodesic whose length is $d(A,B)$. Yet, there may be several such geodesics and the result above: $d(I_n,kI_n)=\sqrt{n}|\log(k)|$ must be proved (intuitively, I am sure that it is true). The advantage of $\mathcal{E}$: the geodesic curves are very simple and, from any matrix $A$, we can aim at any matrix $B$ (yet we can meet a wall in a finite time).
EDIT 3. The answer to your question is NO. Take $A=\begin{pmatrix}-1/2&-1/4\\-1/2&-1/2\end{pmatrix},B=\begin{pmatrix}3/2&3/4\\1/2&1/2\end{pmatrix}\in GL_2^+$; then $\det(A+t(B-A))=(t-1/4)(t-1/2)$. The geodesic $X(t)$ meets the algebraic cone $\det(U)=0$ twice ($t=1/4,1/2$), in two points $P,Q\notin GL_n$ that are distinct from the apex $0$ of the cone, with -each time- a sign change of $\det(X(t))$. The direction of the geodesic, $[2,1,1,1]$, is not tangent to the cone at $P,Q$. Thus these intersections between the geodesic (dimension $1$) and the cone (dimension $3$) are transversal in a vector space of dimension $3+1=4$. Hence a small perturbation of $X(t)$ does not allow one to cross to the other side of the mirror.
This impossibility occurs whenever the function $t\in (0,1)\rightarrow\det(A+t(B-A))$ has at least two sign changes.
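Counting the sign changes numerically for the matrices of EDIT 3 (NumPy sketch, just an illustration):

```python
import numpy as np

A = np.array([[-0.5, -0.25], [-0.5, -0.5]])
B = np.array([[ 1.5,  0.75], [ 0.5,  0.5 ]])

# sample det(A + t(B - A)) = (t - 1/4)(t - 1/2) on a fine grid
ts = np.linspace(0.0, 1.0, 10_000)
dets = np.array([np.linalg.det(A + t * (B - A)) for t in ts])
flips = ts[np.nonzero(np.diff(np.sign(dets)) != 0)[0]]
print(flips)           # two sign changes, near t = 1/4 and t = 1/2
```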
EDIT 4. Proposition. Let $\Sigma=diag((\sigma_i))$ be as in Asaf's note. Then, for the Euclidean metric on $GL_n^+$, $d(\Sigma,SO_n)=d(\Sigma,I_n)$, the $\min$ being reached only at $I_n\in SO_n$.
Proof. If $B\in SO_n$, then $\|B-\Sigma\|_F^2=n+tr(\Sigma^2)-2\sum_i\sigma_i b_{i,i}$. Thus we seek $\sup_{B\in SO_n}(\sum_i\sigma_i b_{i,i})$. Since $|b_{i,i}|\le 1$ for every $B\in SO_n$ and every $\sigma_i>0$, the sup equals $\sum_i\sigma_i$ and is attained at the SOLE point $B=I_n$. Note that the segment $[\Sigma,I_n]$ lies in $GL_n^+$, so the Riemannian distance equals the Euclidean one, and we are done.
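One can also sanity-check the Proposition by random sampling over $SO(n)$ (NumPy sketch; `random_rotation` is my helper):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_rotation(n):
    """A random element of SO(n): QR of a Gaussian matrix, det fixed to +1."""
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    if np.linalg.det(Q) < 0:
        Q[:, -1] *= -1
    return Q

n = 3
Sigma = np.diag([0.5, 1.0, 2.0])                 # sample singular values
claimed = np.linalg.norm(Sigma - np.eye(n))      # the claimed minimum, at I_n
best_sampled = min(np.linalg.norm(Sigma - random_rotation(n))
                   for _ in range(20_000))
print(best_sampled >= claimed)                   # no rotation beats B = I_n
```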