The terms "Jacobian" and "Hessian" can each refer either to a matrix or to the determinant of that matrix. However, the term "Wronskian" seems only to refer to a determinant. My understanding from Wikipedia is that the matrix whose determinant is the Wronskian is called the fundamental matrix.
Assuming this is the correct use of the term, a set of solutions to a 2nd-or-higher order ODE should be linearly independent if and only if the fundamental matrix is invertible. Can means other than the determinant be used to calculate this? For example, can elementary row and/or column operations be performed on the fundamental matrix, until linear independence by inspection is possible in some way? If so, can a row/column be multiplied by any expression in $x$, and any expression-in-$x$ multiple of one row/column added to another, since the entries of the matrix are expressions in $x$, or are these operations still restricted to scalars?
Yes, you can use elementary row and column operations. Consider the two-function case, with functions $f(x)$ and $g(x)$. If $f$ and $g$ are linearly dependent, then the Wronskian vanishes identically (the converse isn't quite true, but it very often holds). In any case, say we want to show that $x$ and $x^2$ are linearly independent. The fundamental matrix is $$ \begin{pmatrix} x^2 & x \\ 2x & 1\end{pmatrix}$$ and so we could show these two functions are linearly independent by noting that the corresponding determinant $-x^2$ is not identically $0$. The easiest way to think about it from here is that we know how elementary operations affect the determinant: swapping rows negates it, scaling a row scales it by the same factor, and adding one row to another doesn't change it. So, for instance, we can multiply the bottom row of this matrix by $-x$ to see that $$ \frac{1}{-x}\begin{vmatrix} x^2 & x \\ -2x^2 & -x\end{vmatrix}$$ is still the Wronskian, and so must still be identically zero if the functions are linearly dependent. The key point is that we can almost always ditch the scalar function of $x$ out front: the Wronskian is nonzero iff the determinant above is nonzero, at least wherever $\frac{-1}{x}$ is finite and nonzero. The only problematic case would be a Wronskian that is nonzero only at $x=0$, which I think is impossible for differentiable functions anyway. So one can really just consider $$\begin{pmatrix} x^2 & x \\ -2x^2 & -x\end{pmatrix}.$$ Then, without changing the determinant, add the bottom row to the top to get $$\begin{pmatrix} -x^2 & 0 \\ -2x^2 & -x\end{pmatrix},$$ whose determinant is obviously not identically zero. More strikingly, though, one could continue until the matrix is diagonal.
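If you want to check this sort of thing mechanically, sympy can build the matrix and take the determinant. (Using sympy here is my own choice, not something from the question; note its built-in `wronskian` orders the rows/columns differently, so the sign can differ, but it vanishes identically in exactly the same cases.)

```python
import sympy as sp

x = sp.symbols("x")

# The two candidate functions from above.
f, g = x, x**2

# Fundamental matrix in the same layout as the answer:
# top row holds the functions, bottom row their derivatives.
M = sp.Matrix([[g, f], [sp.diff(g, x), sp.diff(f, x)]])

W = M.det()                       # the Wronskian
print(W)                          # -> -x**2

# Not identically zero, so x and x**2 are linearly independent.
print(sp.simplify(W) == 0)        # -> False

# sympy's built-in version (standard ordering, differs by a sign here).
print(sp.wronskian([f, g], x))    # -> x**2
```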
For instance, scale the top row by $-2$ to get $$ \frac{1}{2x} \begin{vmatrix} 2x^2 & 0 \\ -2x^2 & -x\end{vmatrix}, $$ and then adding the top row to the bottom gives $$\frac{1}{2x}\begin{vmatrix} 2x^2 & 0 \\ 0 & -x \end{vmatrix}. $$ Since all our elementary row operations have been nice (none of them corresponds to division by $0$ at more than finitely many places), for the nonzeroness of the Wronskian one can more or less just consider $$ \begin{pmatrix}2x^2 & 0 \\ 0 & -x \end{pmatrix}, $$ which is diagonal with nonzero entries whenever $x \neq 0$, and thus obviously not identically zero.
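The whole chain of row operations above, together with the determinant bookkeeping, can be replayed symbolically. This is just a sketch to verify the arithmetic; the slice-assignment style of applying row operations is one of several ways to do it in sympy:

```python
import sympy as sp

x = sp.symbols("x")

# Original fundamental matrix.
M = sp.Matrix([[x**2, x], [2*x, 1]])

# Replay the row operations described above, in order.
A = M.copy()
A[1, :] = -x * A[1, :]        # scale bottom row by -x
A[0, :] = A[0, :] + A[1, :]   # add bottom row to top
A[0, :] = -2 * A[0, :]        # scale top row by -2
A[1, :] = A[1, :] + A[0, :]   # add top row to bottom

print(A)  # -> Matrix([[2*x**2, 0], [0, -x]])

# Bookkeeping: the two scalings multiplied the determinant
# by (-x) * (-2) = 2x, so det(M) should equal det(A) / (2x).
print(sp.simplify(A.det() / (2*x) - M.det()) == 0)  # -> True
```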
So, in short, for most nice functions you can do elementary row operations on the Wronskian pretty much without worry, and you can multiply a row by a function of $x$. The only real concern is multiplying a row by a function of $x$ that is $0$ everywhere the Wronskian is nonzero; but as long as you keep track of what each row operation does to the determinant, you can perform them as you like.
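To tie this back to the original question about testing linear independence: in practice one can skip the row reduction entirely and just ask whether the Wronskian simplifies to zero. Here is a small sketch of that idea; the helper name is mine, not a standard API, and the docstring records the one-directional caveat from above:

```python
import sympy as sp

x = sp.symbols("x")

def wronskian_suggests_independence(funcs, var=x):
    """Return True if the Wronskian of funcs is not identically zero.

    A Wronskian that is nonzero somewhere proves linear independence;
    an identically zero one merely suggests dependence, since the
    converse can fail.
    """
    return sp.simplify(sp.wronskian(funcs, var)) != 0

print(wronskian_suggests_independence([x, x**2]))               # -> True
print(wronskian_suggests_independence([x, 3*x]))                # -> False
print(wronskian_suggests_independence([sp.sin(x), sp.cos(x)]))  # -> True
```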