Linear Algebra: Proof Question Concerning Uniqueness of RREF and Use of Fields


I am trying to figure out why the reduced row echelon form of a matrix is unique using a proof I found on the internet, but one part of it is confusing me (here is the website). The proof looks fine to me, except that I would write $(B_{jn}-C_{jn})u_n=0$ as $u_n(B_{jn}-C_{jn})=0$, so that the scalar $u_n\in \mathbb{F}$ appears on the left-hand side (I could be wrong, though). Now suppose $u_n\neq 0$. Multiplying on the left by $u_n^{-1}$ yields $B_{jn}-C_{jn}=u_n^{-1}\cdot 0$. But how do we know that $u_n^{-1}\cdot 0=0$, so that we can deduce $B_{jn}=C_{jn}$, which is the contradiction I want?


There are 2 answers below.

BEST ANSWER

So, you essentially want to show that multiplying the zero matrix by any scalar yields the zero matrix.

Note that $n \times m$ matrices form an additive group, and the matrix $0_{n \times m}$, the matrix consisting of all zeros, is the identity of that group. Thus $0_{n \times m} = 0_{n \times m} + 0_{n \times m}$. Furthermore, scalar multiplication distributes over matrix addition, which allows us to say that for any scalar $a \in F$,

$a0_{n \times m} = a(0_{n \times m} + 0_{n \times m}) = a0_{n \times m} + a0_{n \times m}$.

Now by the cancellation property of groups, we may say that $0_{n \times m} = a0_{n \times m}$.

Alternatively, one can see that this follows by first proving the analog for rings, and then using the definition of scalar multiplication for matrices.
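The same one-line argument also works directly in the field $\mathbb{F}$, which is all the original question needs (writing $a = u_n^{-1}$):

$$ a\cdot 0 = a(0+0) = a\cdot 0 + a\cdot 0, $$

and subtracting $a\cdot 0$ from both sides gives $a\cdot 0 = 0$, hence $B_{jn}-C_{jn}=0$.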

ANSWER

I don't find that proof particularly simple. The key fact for the RREF $U$ of the matrix $A$ is that there exists an invertible matrix $F$ such that $U=FA$.

The second key fact is that a column of $U$ is a linear combination of other columns of $U$ if and only if the corresponding column of $A$ is the linear combination of the corresponding columns of $A$ with the same coefficients. More precisely, if $i_0,i_1,\dots,i_r$ are distinct column indices, then $$ u_{i_0}=c_1u_{i_1}+\dots+c_ru_{i_r} \quad\text{if and only if}\quad a_{i_0}=c_1a_{i_1}+\dots+c_ra_{i_r} $$ (where $u_i$ denotes the $i$-th column of $U$ and $a_i$ the $i$-th column of $A$). This is an easy consequence of how matrix multiplication is performed, together with the invertibility of $F$.
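Both key facts can be checked concretely. Here is a small SymPy sketch (the matrix `A` below is just an illustrative choice with a dependent third column); it recovers an invertible $F$ with $FA=U$ by row-reducing the augmented matrix $[A \mid I]$, and verifies that the column relation in $A$ survives in $U$:

```python
from sympy import Matrix, eye

# An example matrix with a dependent column: a3 = 2*a1 + a2.
A = Matrix([[1, 0, 2],
            [0, 1, 1],
            [1, 1, 3]])

U, pivots = A.rref()          # U is the RREF of A; pivots = (0, 1)

# First key fact: U = F*A for an invertible F.
# Row-reducing [A | I] applies the same row operations to I,
# so the right block records the accumulated (invertible) F.
aug, _ = Matrix.hstack(A, eye(3)).rref()
F = aug[:, 3:]
assert F * A == U and F.det() != 0

# Second key fact: the same linear relation holds between the
# corresponding columns of A and of U.
assert A[:, 2] == 2*A[:, 0] + A[:, 1]
assert U[:, 2] == 2*U[:, 0] + U[:, 1]
```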

Now the pivot columns in $U$ are the first $k$ columns of the identity matrix (where $k$ is the rank of $A$), so each nonpivot column of $U$ simply records how that column is obtained as a linear combination of the pivot columns to its left. This depends only on the linear relations among the columns of $A$. The pivot columns are uniquely determined as well, because a column is a pivot column if and only if it is not a linear combination of the columns to its left.

As an example, if $U$ is $$ \begin{bmatrix} 1 & 2 & 0 & -3 & 1 & 0 & 2 \\ 0 & 0 & 1 & 2 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 & 0 & 1 & 4 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{bmatrix} $$ then we have, for the columns of $A$, $$ \begin{cases} a_2=2a_1 \\ a_4=-3a_1+2a_3 \\ a_5=a_1 \\ a_7=2a_1+3a_3+4a_6 \end{cases} $$ because of the corresponding relations between the columns of $U$.
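This example can be replayed in SymPy: multiplying $U$ on the left by any invertible $F$ manufactures a matrix $A=FU$ whose RREF is $U$ again (by uniqueness), and the listed column relations then hold in $A$. The particular $F$ below is an arbitrary invertible choice, nothing more:

```python
from sympy import Matrix

U = Matrix([
    [1, 2, 0, -3, 1, 0, 2],
    [0, 0, 1,  2, 0, 0, 3],
    [0, 0, 0,  0, 0, 1, 4],
    [0, 0, 0,  0, 0, 0, 0],
])

# An arbitrary invertible 4x4 matrix, used only to build A = F*U.
F = Matrix([
    [1, 2, 0, 1],
    [0, 1, 3, 0],
    [2, 0, 1, 1],
    [1, 1, 1, 1],
])
assert F.det() != 0

A = F * U
R, pivots = A.rref()
assert R == U and pivots == (0, 2, 5)   # pivot columns 1, 3, 6 (1-based)

# The column relations read off from U hold in A as well.
a = lambda j: A[:, j - 1]               # 1-based column access
assert a(2) == 2*a(1)
assert a(4) == -3*a(1) + 2*a(3)
assert a(5) == a(1)
assert a(7) == 2*a(1) + 3*a(3) + 4*a(6)
```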