Well, I need to prove that if $A$ is an $n \times n$ singular matrix, then $\det A = 0$, in order to show Binet's Theorem $\det(AB) = \det(A)\cdot\det(B)$ in the case when both $A$ and $B$ are singular square matrices.
I'd appreciate any help with this!
Consider $B=A \cdot C^T$ where $C$ is the cofactor matrix of $A$.
Then $b_{i,j}=\sum_k a_{i,k} \, c_{j,k}$, where $c_{j,k}$ is the $(j,k)$ cofactor of $A$ (the signed $(j,k)$ minor).
If $i = j$, this sum is the cofactor expansion of $\det A$ along row $i$, hence $b_{i,i}=\det A$.

If $i\ne j$, this sum is the cofactor expansion of the determinant of the matrix obtained from $A$ by overwriting row $j$ with the contents of row $i$. But the determinant of a matrix with two equal rows is zero, hence $b_{i,j}=0$.
This can be written $$A \cdot C^T = \det(A) \, I \tag{1}$$
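As a sanity check, identity $(1)$ can be verified symbolically for a concrete matrix (the $3\times 3$ matrix below is just an illustrative choice) using sympy:

```python
import sympy as sp

# Hypothetical 3x3 example to check identity (1): A * C^T = det(A) * I
A = sp.Matrix([[2, 1, 0],
               [1, 3, 4],
               [0, 5, 6]])

C = A.cofactor_matrix()       # matrix of cofactors c_{j,k}
lhs = A * C.T                 # A times the transposed cofactor matrix
rhs = A.det() * sp.eye(3)     # det(A) * I

print(lhs == rhs)  # True
```

Note that `C.T` is exactly what sympy calls `A.adjugate()`.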
If $\det A\ne 0$ then we can write $$ A \, \frac{C^T}{\det A}=I \iff A^{-1}=\frac{C^T}{\det A}=\frac{\operatorname{adj}(A)}{\det A} \tag{2}$$
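A quick illustration of formula $(2)$, again with an arbitrary invertible example matrix:

```python
import sympy as sp

# Illustrative check of (2): for invertible A, A^{-1} = adj(A)/det(A)
A = sp.Matrix([[1, 2],
               [3, 5]])              # det(A) = -1, so A is invertible

inv_via_adjugate = A.adjugate() / A.det()
print(inv_via_adjugate == A.inv())   # True
```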
(This essentially amounts to rediscovering/proving Cramer's rule.)
That is, $\det A\ne 0 \implies A^{-1}$ exists (hence, $A$ is nonsingular).
Consequently, by contraposition, if $A$ is singular (i.e. $A^{-1}$ does not exist) then we must have $\det A=0$.
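The conclusion can be seen concretely on a singular matrix (here a hypothetical example whose second row is twice the first, so the rows are linearly dependent):

```python
import sympy as sp

# Singular example: row 2 = 2 * row 1, so det must be 0 and no inverse exists
S = sp.Matrix([[1, 2],
               [2, 4]])

print(S.det())                       # 0
try:
    S.inv()
except ValueError:                   # sympy's NonInvertibleMatrixError subclasses ValueError
    print("S is not invertible")
```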