Proofs of the fact that a matrix is invertible iff its determinant is non-zero generally begin by saying "Define the determinant to be [very complicated formula]. We will now prove the result...". This is obviously unsatisfactory to many people.
Other proofs begin by listing axioms that the determinant should satisfy, and then prove that such a function exists, is unique, and is given by that formula. This isn't much better - why should we be interested in these axioms? Without knowing better, there isn't even any reason to suspect there exists any polynomial or simple function $f$ at all such that $f(A)=0$ iff $A$ is singular.
If you apply Gaussian elimination to a general 2x2 or 3x3 matrix, you get tantalizingly close, because the determinant formula arises naturally from the calculations, and it's clear that indeed, if it's zero, the matrix must be singular. The trouble is that in order to get to that point, you need to start assuming that this, that and the other is non-zero so that you can divide by them, and your assumptions begin branching: if that's non-zero, then assume this is non-zero, otherwise... So it's difficult to see how this could be turned into a proper proof.
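To make the 2x2 case concrete (under the same kind of non-vanishing assumption the question complains about, here $a \neq 0$), one elimination step already exposes the formula:

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\xrightarrow{\,R_2 \mapsto R_2 - \frac{c}{a} R_1\,}
\begin{pmatrix} a & b \\ 0 & d - \frac{bc}{a} \end{pmatrix}$$

The matrix is invertible iff both pivots are non-zero, i.e. iff $a\left(d - \frac{bc}{a}\right) = ad - bc \neq 0$, which is exactly the determinant. The branching starts when $a = 0$ and you must argue separately.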
Can the proof outlined in the last paragraph be carried out rigorously? Or something like it?
Yes. Each elementary row operation multiplies the determinant by a nonzero constant: swapping two rows multiplies it by $-1$, scaling a row by $c \neq 0$ multiplies it by $c$, and adding a multiple of one row to another leaves it unchanged. So $\det A$ is zero iff the determinant of the reduced row echelon form of $A$ is zero, and it suffices to prove the result for matrices in reduced row echelon form. Such a (square) matrix is either the identity, with determinant $1$ and invertible, or has a zero row, with determinant $0$ and singular.
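This also answers the worry about branching: the "otherwise" branches become row swaps, each contributing a factor of $-1$, and a column with no available pivot means the matrix is singular. A minimal sketch of this argument as an algorithm (exact arithmetic via `Fraction`; the function name is my own, not a standard one):

```python
from fractions import Fraction

def det_by_elimination(rows):
    """Compute the determinant by Gaussian elimination, tracking how each
    row operation scales the determinant."""
    a = [[Fraction(x) for x in row] for row in rows]
    n = len(a)
    sign = 1
    for col in range(n):
        # The "branching" in the question: find any row with a nonzero
        # entry in this column.  A row swap multiplies the determinant by -1.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)  # no pivot available: the matrix is singular
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign
        # Adding a multiple of one row to another leaves the determinant unchanged.
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    # Determinant = sign * product of the pivots.
    det = Fraction(sign)
    for i in range(n):
        det *= a[i][i]
    return det

print(det_by_elimination([[1, 2], [3, 4]]))                   # -2 = 1*4 - 2*3
print(det_by_elimination([[0, 1], [1, 0]]))                   # -1 (needed a row swap)
print(det_by_elimination([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 0 (singular)
```

The point is not the code itself but that the pivots reproduce the familiar determinant formulas, with every division justified because the pivot was checked to be nonzero first.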