Mathematica gives the pseudo-inverse of a matrix almost instantaneously, so I suspect it is not calculating the pseudo-inverse via the singular value decomposition. Since the pseudo-inverse of a matrix is unique, is there a good formula we can use to simplify the calculation of the pseudo-inverse, in place of the compact singular value decomposition?
I'm thinking about the property that $A^\dagger = (A^\ast A)^{-1} A^\ast$ (when $A^\ast A$ is invertible, i.e. $A$ has full column rank), and I think this should give the unique pseudo-inverse of $A$. But I'm not too sure. Thanks for any insight!
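As a quick sanity check of that closed form, here is a minimal NumPy sketch (an illustration, not what Mathematica does internally) comparing $(A^\ast A)^{-1} A^\ast$, which is valid when $A$ has full column rank, against the SVD-based pseudoinverse. The matrix shape is an arbitrary choice:

```python
import numpy as np

# Sketch: for a matrix with full column rank, the closed form
# (A* A)^{-1} A* agrees with the SVD-based pseudoinverse.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))     # tall matrix; full column rank almost surely

pinv_formula = np.linalg.inv(A.conj().T @ A) @ A.conj().T
pinv_svd = np.linalg.pinv(A)        # computed via the SVD internally

print(np.allclose(pinv_formula, pinv_svd))  # True when A has full column rank
```

Note that the closed form fails when $A$ is rank-deficient, since $A^\ast A$ is then singular, while the SVD route still yields the pseudoinverse.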
Without being able to see your results, the following comments may lack full context.
The implementation of the pseudoinverse is modern and efficient, and will bypass the SVD if possible. Computation with numerical matrices is quite fast and is based on BLAS-level optimizations for your hardware.
Examples where $\mathbf{A}^{+}$ is constructed without the SVD are presented by user1551 in Find the pseudo-inverse of the matrix A without computing singular values of A.
Sequence of Jordan normal forms
These examples bring your points to life. The SVD takes much longer to compute than $\mathbf{A}^{+}$.
$$ \begin{array}{cc} % \mathbf{A} = \left( \begin{array}{cccc} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) % & \mathbf{A}^{+} = \left( \begin{array}{crcc} 1 & -1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) \\ % \text{SVD time} = 0.006 \text{ s} & \mathbf{A}^{+}\text{ time} = 0.0003 \text{ s} % \end{array} $$
$$ \begin{array}{cc} % \mathbf{A} = \left( \begin{array}{cccc} 1 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) % & \mathbf{A}^{+} = \left( \begin{array}{crrc} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) \\ % \text{SVD time} = 0.008 \text{ s} & \mathbf{A}^{+}\text{ time} = 0.0003 \text{ s} % \end{array} $$
$$ \begin{array}{cc} % \mathbf{A} = \left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) % & \mathbf{A}^{+} = \left( \begin{array}{crrr} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) \\ % \text{SVD time} = 0.011 \text{ s} & \mathbf{A}^{+}\text{ time} = 0.0004 \text{ s} % \end{array} $$
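Each of these matrices is unit upper triangular and therefore invertible, so the pseudoinverse reduces to the ordinary inverse and no SVD is needed. A quick NumPy check for the third example (a sketch, not Mathematica's implementation):

```python
import numpy as np

# The third example above: A is unit upper triangular, hence invertible,
# so its pseudoinverse is just its ordinary inverse -- no SVD needed.
A = np.triu(np.ones((4, 4)))

A_pinv = np.linalg.inv(A)
expected = np.array([[1., -1.,  0.,  0.],
                     [0.,  1., -1.,  0.],
                     [0.,  0.,  1., -1.],
                     [0.,  0.,  0.,  1.]])
print(np.allclose(A_pinv, expected))  # True
print(np.allclose(np.linalg.pinv(A), expected))  # SVD route agrees
```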
The SVD is an invaluable theoretical tool: it resolves the four fundamental subspaces, aligns them, and computes the scale factors. As seen in How does the SVD solve the least squares problem?, it provides a natural expression for the solution of the least squares problem. It is also quite useful for understanding different forms of the Moore-Penrose pseudoinverse: What forms does the Moore-Penrose inverse take under systems with full rank, full column rank, and full row rank?, Pseudo-inverse of a matrix that is neither fat nor tall?. Yet as you point out, there are often faster ways to compute $\mathbf{A}^{+}$.
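One such faster route, sketched here in NumPy under the assumption that $\mathbf{A}$ has full column rank: solve the normal equations directly for the least squares solution instead of forming the SVD. The matrix sizes are arbitrary illustrations:

```python
import numpy as np

# Sketch: when A has full column rank, the least squares solution
# A+ b can be obtained by solving the normal equations (A* A) x = A* b,
# avoiding the SVD entirely.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)

x_pinv = np.linalg.pinv(A) @ b                   # SVD-based route
x_normal = np.linalg.solve(A.T @ A, A.T @ b)     # no SVD

print(np.allclose(x_pinv, x_normal))  # True
```

The usual caveat applies: the normal equations square the condition number of $\mathbf{A}$, so for ill-conditioned problems the SVD (or a QR factorization) remains the numerically safer choice.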
If you think the SVD is too slow
A common issue arises when users create matrices with integers and special constants like $\pi$ and then complain that the SVD is too slow. You will find relief if, instead of requesting an arbitrary-precision result with

    SingularValueDecomposition[A]

you request a machine-precision result with

    SingularValueDecomposition[N[A]]
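The same symbolic-versus-numeric trade-off can be sketched in Python with SymPy and NumPy (an assumed analogy, not Mathematica's internals): exact entries such as $\pi$ force slow exact arithmetic, while converting to machine precision first, as `N[A]` does, hands the work to fast numeric routines.

```python
import numpy as np
import sympy as sp

# Analogy (assumed, not Mathematica itself): exact symbolic entries force
# slow exact arithmetic; converting to floats first uses fast LAPACK code.
A_sym = sp.Matrix([[1, sp.pi], [2, 3]])
pinv_exact = A_sym.pinv()                     # exact; can be very slow at scale

A_num = np.array(A_sym.evalf(), dtype=float)  # numeric analogue of N[A]
pinv_num = np.linalg.pinv(A_num)

print(np.allclose(np.array(pinv_exact.evalf(), dtype=float), pinv_num))  # True
```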