Say we're given a set of $d$ vectors $S=\{\mathbf{v}_1,\dots,\mathbf{v}_d\}$ in $\mathbb{R}^n$, with $d\leq n$ (obviously). We want an efficient way to test whether $S$ is linearly independent. Write the coefficient matrix $\mathbf{A}=[\mathbf{v}_1 \dots \mathbf{v}_d]$ (the $\mathbf{v}_i$ are taken as column vectors).
A non-efficient way would be to compute all minors of order $d$ of $\mathbf{A}$, and check whether at least one of them is non-zero (up to some tolerance, as always when we do numerical linear algebra). Another way would be Gram–Schmidt orthogonalization, but I recall that classical Gram–Schmidt is numerically unstable. Which is the correct alternative? Singular value decomposition (if all singular values are strictly positive, then the vectors are independent)? QR factorization?
The standard numerical approaches are computing a (rank-revealing) QR decomposition or an SVD. Both NumPy and MATLAB use the SVD for their rank computations, and both cost $O(nd^2)$ operations[1]. In general, whenever stability is a concern, the SVD tends to be preferred. However, even plain QR with Householder reflections or modified Gram–Schmidt may give reasonable results in many cases.
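For the QR route, a pivoted (rank-revealing) factorization is the usual choice. Here is a minimal sketch using SciPy's `scipy.linalg.qr` with `pivoting=True`; with column pivoting the diagonal of $R$ is non-increasing in magnitude, so a drop below a tolerance reveals the numerical rank. The tolerance choice here is my own assumption, not a universal default:

```python
import numpy as np
from scipy.linalg import qr

def rank_via_pivoted_qr(A, tol=1e-10):
    """Estimate numerical rank from a QR factorization with column pivoting.

    With pivoting, |R[0,0]| >= |R[1,1]| >= ..., so we count the diagonal
    entries that are not negligible relative to the largest one.
    (tol is an ad hoc choice; tune it to your application.)
    """
    R = qr(A, mode='r', pivoting=True)[0]
    diag = np.abs(np.diag(R))
    return int(np.sum(diag > tol * diag[0]))

# Column 3 is the sum of columns 1 and 2, so the rank is 2, not 3:
A = np.column_stack([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
print(rank_via_pivoted_qr(A))  # 2
```

The $d$ vectors are independent exactly when the estimated rank equals $d$.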
As you allude to in the original question, there is always a trade-off between accuracy and computation time, and the "best" algorithm depends on your application. The notion of "rank" itself is not well defined in finite precision; in defining a numerical rank, it would be reasonable to take any of the equivalent characterizations of exact rank and then relax it.
In general, one could define $A$ to have numerical rank $k$ if the singular values $\sigma_{k+1}, \ldots, \sigma_n$ are all less than some tolerance (this is what NumPy and MATLAB do). Of course, even with this definition, how closely your numerical method approximates the exact SVD of your finite-precision matrix is another concern; that is, two different implementations of an SVD algorithm may give slightly different results.
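In NumPy terms, this definition looks like the following sketch; the default tolerance shown is the one documented for `numpy.linalg.matrix_rank` (largest singular value times the larger matrix dimension times machine epsilon):

```python
import numpy as np

def numerical_rank(A, tol=None):
    """Numerical rank: the number of singular values above a tolerance."""
    s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
    if tol is None:
        # default used by numpy.linalg.matrix_rank
        tol = s.max() * max(A.shape) * np.finfo(s.dtype).eps
    return int(np.sum(s > tol))

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])
print(numerical_rank(A))  # agrees with np.linalg.matrix_rank(A)
```

Passing an explicit `tol` lets you make the accuracy/strictness trade-off for your own application.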
On the other hand, if $1\ll k\ll n$ and you are satisfied with an approximation to the rank, then perhaps a randomized algorithm would be best.
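One common randomized approach is a Gaussian sketch: compress $A$ on both sides with random matrices and take the rank of the much smaller sketch, which equals $\operatorname{rank}(A)$ with probability one (in exact arithmetic) as long as the sketch size exceeds the true rank. A minimal sketch of the idea, where `sketch_size` and the tolerance are assumptions to be tuned:

```python
import numpy as np

def randomized_rank(A, sketch_size, tol=1e-8, seed=0):
    """Estimate rank(A) from a two-sided Gaussian sketch.

    Costs O(n*d*sketch_size) instead of the O(n*d^2) of a full SVD,
    a win when the expected rank is much smaller than d.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    Omega = rng.standard_normal((d, sketch_size))
    Psi = rng.standard_normal((sketch_size, n))
    B = Psi @ (A @ Omega)  # small sketch_size x sketch_size matrix
    s = np.linalg.svd(B, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

# A 200 x 100 matrix of rank 5, sketched down to 20 x 20:
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
print(randomized_rank(A, sketch_size=20))  # 5
```

If the estimate comes back equal to `sketch_size`, the sketch was too small and you should retry with a larger one.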
I am not familiar with Smith normal form, but I would be very hesitant to use it in a general purpose numerical method as it seems like it would be susceptible to the same issues as Gaussian elimination.
[1] A rank-$r$ SVD can be computed somewhat more cheaply, so if you have an upper bound on the rank of your matrix, you may be able to do things somewhat faster.