Is there a simple proof that a non-invertible matrix reduces to give a zero row?


Let $A$ be a square matrix that is non-invertible. I was wondering if there is a simple proof that we can apply elementary row operations to get a zero row. (For a matrix $C$ to be invertible, I mean there is $B$ such that $CB = BC = I$.)

I can prove this using elementary column operations but I would like a more direct proof that doesn’t appeal to column operations or the fact that row rank equals column rank, or anything to do with transposes, or the existence of RREF, or determinants, etc. The difficulty seems to be that elementary row operations are applied on the row space, whereas invertibility is sort of defined in terms of the column space.

You could also use (though it is preferred not to) facts like: a matrix $C$ is invertible if and only if the null space of $C$ is zero (i.e. $C$ is injective), which is equivalent to $C$ being surjective.



BEST ANSWER

If your matrix $A$ is $n\times n$, the search for an inverse matrix is the same as solving the $n$ linear systems $$ Ax=e_i\qquad (i=1,2,\dots,n) $$ where $e_i$ is the $i$-th column of the identity matrix. If the matrix is not invertible, then at least one of those systems must have no solution, say it's the one for $e_i$.

Performing row reduction on the augmented matrix $[A\mid e_i]$ then shows that the last column must be a pivot column (otherwise the system would be consistent). If that pivot lies in row $j$, then the row reduction of $A$ has a zero $j$-th row.
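As a sanity check of this argument (not part of the answer), here is a minimal forward-elimination sketch in NumPy; the `row_echelon` helper and the example matrix are my own illustrations. Row-reducing a singular matrix leaves a zero row:

```python
import numpy as np

def row_echelon(A, tol=1e-12):
    """Reduce A to row-echelon form by forward elimination with partial pivoting."""
    A = A.astype(float).copy()
    m, n = A.shape
    row = 0
    for col in range(n):
        if row == m:
            break
        # choose the largest pivot in this column at or below `row`
        p = row + np.argmax(np.abs(A[row:, col]))
        if abs(A[p, col]) < tol:
            continue                        # no pivot in this column
        A[[row, p]] = A[[p, row]]           # swap rows
        for r in range(row + 1, m):         # eliminate below the pivot
            A[r] -= (A[r, col] / A[row, col]) * A[row]
        row += 1
    return A

A = np.array([[1., 2., 3.],
              [2., 4., 6.],   # = 2 * (row 1), so A is singular
              [1., 0., 1.]])
R = row_echelon(A)
print(np.allclose(R[-1], 0))  # True: the last row is (numerically) zero
```

The same routine applied to an invertible matrix produces a pivot in every row, so no zero row can appear.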


Presumably you are talking about square matrices. Here is a fairly direct proof: elementary row operations do not change whether a matrix is invertible. So if you can never reach a zero row, Gaussian elimination converts $A$ to a row-echelon form with no zero rows; for a square matrix this means every row has a pivot, the pivots lie on the diagonal, and further reduction yields the identity matrix. But then $A$ is invertible, a contradiction.
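The key claim that row operations preserve invertibility can be seen concretely: each operation is left-multiplication by an invertible elementary matrix. A small NumPy sketch (the matrices below are my own examples, not from the answer):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])

E_swap  = np.array([[0., 1.], [1., 0.]])    # swap the two rows
E_scale = np.array([[1., 0.], [0., 5.]])    # multiply row 2 by 5
E_add   = np.array([[1., 0.], [-3., 1.]])   # row 2 -= 3 * row 1

# each elementary matrix is invertible, so E @ A is invertible iff A is
for E in (E_swap, E_scale, E_add):
    assert np.allclose(E @ np.linalg.inv(E), np.eye(2))

# left-multiplying by E performs the corresponding row operation on A
print(np.allclose(E_add @ A, [[1., 2.], [0., -2.]]))  # True
```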


Suppose $A$ is a non-invertible matrix that does not reduce to a zero row. Then Gaussian elimination with multiple right-hand sides can be used to solve

$$AB=I,$$

and the process runs to completion, giving (after back substitution) a solution

$$B=A^{-1},$$

contradicting the assumption that $A$ is non-invertible.

A zero row is exactly what blocks the elimination process and makes the full inversion impossible.
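This "elimination with all $n$ right-hand sides at once" can be sketched in NumPy (my own example matrices, assuming `np.linalg.solve` as a stand-in for the elimination-plus-back-substitution process):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])              # invertible
B = np.linalg.solve(A, np.eye(2))     # solve AB = I, all columns at once
print(np.allclose(A @ B, np.eye(2)))  # True: B is the inverse

S = np.array([[1., 2.],
              [2., 4.]])              # singular: row 2 = 2 * row 1
try:
    np.linalg.solve(S, np.eye(2))
except np.linalg.LinAlgError:
    # elimination breaks down: a zero pivot / zero row appears
    print("singular matrix: elimination cannot complete")
```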


Well, it is known that if the columns $c_1, \dots, c_n \in \mathbb{R}^n$ of $A = (c_1, \dots, c_n)$ are linearly independent, then $A$ is invertible. So if $A$ isn't invertible, its columns are linearly dependent: there is some $i$ with $1 \leq i \leq n$ such that $c_i = \sum_{j \neq i} a_j c_j$, i.e. one column is a linear combination of the others.
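This column dependence can be recovered numerically from a null-space vector: if $Av = 0$ with $v \neq 0$, then $\sum_j v_j c_j = 0$, so any column with $v_i \neq 0$ is a combination of the rest. A NumPy sketch (the example matrix is my own, not from the answer):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [0., 1., 1.],
              [1., 3., 4.]])   # column 3 = column 1 + column 2, so A is singular

# the right singular vector for the smallest singular value spans the null space
_, s, Vt = np.linalg.svd(A)
v = Vt[-1]
print(np.isclose(s[-1], 0.0, atol=1e-10))  # True: A is singular

i = np.argmax(np.abs(v))       # pick a column with nonzero coefficient
coeffs = -v / v[i]             # rewrite c_i = sum_{j != i} coeffs_j c_j
coeffs[i] = 0.0
print(np.allclose(A[:, i], A @ coeffs))    # True: c_i is a combination of the others
```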