Determinant at every step while finding matrix inverse


I've come to an intuitive conclusion that feels right and for which it seems there must be a proof, but I have been unable to locate one, nor am I certain how to go about writing it myself. Therefore, my question is: "Is there a proof of the following?"

For a square matrix to have an inverse, its determinant must be non-zero. When finding the inverse using an augmented matrix and Gaussian elimination, it feels intuitive that the left square block of that augmented matrix should always have a non-zero determinant, since it is just a manipulated form of the original matrix... though I'm not certain it is entirely valid to look at the left portion of the augmented matrix before reaching the final inverted matrix.

This came up with a student as a potential way of figuring out that some arithmetic error has crept in should the determinant, for instance, suddenly become zero.

So, is there a proof of the above intuition?


When performing elementary row operations on an invertible linear system, each operation only scales the determinant by a non-zero factor. So it is not possible to obtain a zero determinant during this process. In fact, when you reduce a linear system to RREF, you are multiplying the original coefficient matrix $A$ on the left by a product of elementary matrices, each of which has non-zero determinant.

$$Ax = b \rightarrow E_mE_{m-1} \cdots E_1 Ax = E_mE_{m-1} \cdots E_1 b.$$

So, after $k$ steps the left block of the augmented matrix is $E_k \cdots E_1 A$, whose determinant is $\det(E_k) \cdots \det(E_1) \det(A)$. If $\det(A) \ne 0$, this is a product of non-zero factors and hence non-zero at every step, which justifies the intuition. (And when the reduction terminates at the identity, $E_m \cdots E_1 A = I$ gives $\det(A) = \det(E_1)^{-1} \cdots \det(E_m)^{-1}$.)
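The argument above can be checked numerically. Here is a minimal sketch (the function name `invert_with_det_check` and the tolerance are my own choices, not from the answer) of Gauss-Jordan inversion that asserts the left block of $[A \mid I]$ keeps a non-zero determinant after every row operation, exactly as a student might use it to catch an arithmetic slip:

```python
import numpy as np

def invert_with_det_check(A, tol=1e-12):
    """Invert A by Gauss-Jordan elimination on the augmented matrix [A | I],
    checking after each elementary row operation that the left block's
    determinant is still non-zero. Illustrative sketch, not production code."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])  # augmented matrix [A | I]

    def check():
        # the left n-by-n block is E_k ... E_1 A; its determinant must stay non-zero
        d = np.linalg.det(M[:, :n])
        assert abs(d) > tol, "left block became singular: likely an arithmetic error"

    for col in range(n):
        # swap in the largest available pivot (a swap only flips the sign of det)
        p = col + np.argmax(np.abs(M[col:, col]))
        if p != col:
            M[[col, p]] = M[[p, col]]
            check()
        # scale the pivot row to make the pivot 1 (det scaled by a non-zero factor)
        M[col] /= M[col, col]
        check()
        # clear the rest of the column (adding a multiple of a row leaves det unchanged)
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]
        check()

    return M[:, n:]  # once the left block is I, the right block is A^{-1}
```

For example, `invert_with_det_check([[2.0, 1.0], [1.0, 1.0]])` runs through the checks without tripping the assertion and returns the inverse; feeding it a singular matrix fails the very first determinant check instead of silently producing garbage.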