Proof that if a matrix is invertible, its rank is maximum


I have to prove that if a square matrix $A \in \mathfrak{M}_n (\mathbb{K})$ is invertible, then $rg(A) = n$. The catch is that I cannot use vector spaces, subspaces, etc. to prove this, only matrix theory (elementary row operations, operations with matrices, elementary matrices, and so on).

I have managed to prove the contrapositive (i.e. $rg(A) < n \implies \nexists A^{-1}$) using a rather involved argument. Consider the following statements:

  1. $A$ is invertible.
  2. $BA=0 \implies B=0$.
  3. $AB=0 \implies B=0$.
  4. $rg(A) = n$.

I proved $(1) \implies (2)$ and $(1) \implies (3)$. Then I proved $(2) \implies (4)$ by proving its contrapositive (i.e. $\neg(4) \implies \neg (2)$). Combined with $\neg(2) \implies \neg(1)$ (the contrapositive of $(1) \implies (2)$, which I had already established), this gives $\neg(4) \implies \neg(2) \implies \neg(1)$, hence $(1) \implies (4)$, which is what I wanted to prove.

For anyone interested, below is my proof (which is mostly irrelevant to my question, you can skip reading it if you want).

Proof

$(1) \implies (2)$. If $A$ is invertible and $BA=0$, then multiplying by $A^{-1}$ from the right we get $BAA^{-1} = 0A^{-1} \implies BI_n=B=0$.

$(1) \implies (3)$. Similarly, if $A$ is invertible and $AB=0$, then multiplying by $A^{-1}$ from the left we get $A^{-1}AB=A^{-1}0 \implies I_nB=B=0$.

$\neg(4) \implies \neg (2) \implies \neg (1)$. Assume that $rg(A)<n$, and let $H_A^f$ be the row Hermite normal form of $A$. Since $rg(A)<n$, the last row of $H_A^f$ is entirely made up of zeros. Now consider the square matrix $D \in \mathfrak{M}_n (\mathbb{K})$ whose entries are all zero except the bottom-right one, which is $1$:

$$D=\begin{pmatrix}0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 &\cdots & 0 & 1 \end{pmatrix}$$

Obviously, $DH^f_A=0$. Since $H^f_A$ is obtained from $A$ by applying finitely many elementary row operations, we can write $H^f_A=E_kE_{k-1}\cdots E_2E_1A$, where $E_k,E_{k-1},\dots,E_2,E_1$ are elementary matrices. Thus $$0=DH^f_A=D(E_kE_{k-1}\cdots E_2E_1A)=(DE_kE_{k-1}\cdots E_2E_1)A$$ This establishes $\neg(2)$, as we have found a matrix $B:=DE_kE_{k-1}\cdots E_2E_1\neq0$ such that $BA=0$. (The reason $B\neq 0$ is that a product of elementary matrices is invertible, so if we had $D(E_kE_{k-1}\cdots E_2E_1)=0$, we could multiply by $(E_kE_{k-1}\cdots E_2E_1)^{-1}$ from the right, which by step $(1) \implies (2)$ would yield $D=0$, contradicting the way $D$ was constructed.)
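As a quick sanity check of this construction on a concrete example (a sketch only; the singular matrix $A$ and the elementary matrices $E_1,\dots,E_5$ below are hypothetical choices I made for illustration, not part of the proof above):

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# A singular 3x3 matrix: row 2 = 2 * row 1, so rg(A) = 2 < 3.
A = [[1, 2, 3],
     [2, 4, 6],
     [1, 1, 1]]

# Elementary row operations bringing A to its row Hermite form H:
E1 = [[1, 0, 0], [-2, 1, 0], [0, 0, 1]]   # R2 -> R2 - 2*R1
E2 = [[1, 0, 0], [0, 1, 0], [-1, 0, 1]]   # R3 -> R3 - R1
E3 = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]    # swap R2 and R3
E4 = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]   # R2 -> -R2
E5 = [[1, -2, 0], [0, 1, 0], [0, 0, 1]]   # R1 -> R1 - 2*R2

# P = E5*E4*E3*E2*E1 is invertible (product of elementary matrices).
P = matmul(E5, matmul(E4, matmul(E3, matmul(E2, E1))))
H = matmul(P, A)
assert H[2] == [0, 0, 0]   # last row of H is zero, since rg(A) < 3

# D has a single 1 in the bottom-right corner.
D = [[0, 0, 0], [0, 0, 0], [0, 0, 1]]

B = matmul(D, P)
assert B != [[0] * 3] * 3              # B is nonzero (last row of P is nonzero)
assert matmul(B, A) == [[0] * 3] * 3   # yet B*A = 0, so A cannot be invertible
```

The nonzero $B$ with $BA=0$ comes out exactly as the proof predicts: $B$'s only nonzero row is the last row of the invertible product $P$.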

Question: is there a (simpler) alternative way to prove this, preferably a direct proof, that only uses simple tools and concepts like the ones in my proof (i.e. matrix theory)?

Also, can someone confirm the validity of my proof?