Why does reducing the matrix $\begin{bmatrix} 1&2&3\\ 3&6&9\\ 4&4&8\\ \end{bmatrix}$ to row echelon form yield its rank $= 2$?


The matrix $\begin{bmatrix} a_{1}\\ a_{2}\\ a_{3}\\ \end{bmatrix}$ represents a vector with $a_{1}$, $a_{2}$, and $a_{3}$ being its $x$-, $y$-, and $z$-coordinates.
If we create another two vectors $\begin{bmatrix} b_{1}\\ b_{2}\\ b_{3}\\ \end{bmatrix}$ and $\begin{bmatrix} a_{1}+b_{1}\\ a_{2}+b_{2}\\ a_{3}+b_{3}\\ \end{bmatrix}$ we can represent all three vectors with a matrix $\begin{bmatrix} a_{1}&b_{1}&a_{1}+b_{1}\\ a_{2}&b_{2}&a_{2}+b_{2}\\ a_{3}&b_{3}&a_{3}+b_{3}\\ \end{bmatrix}$.
For example, let $\begin{bmatrix} a_{1}&b_{1}&a_{1}+b_{1}\\ a_{2}&b_{2}&a_{2}+b_{2}\\ a_{3}&b_{3}&a_{3}+b_{3}\\ \end{bmatrix}$ = $\begin{bmatrix} 1&2&3\\ 3&6&9\\ 4&4&8\\ \end{bmatrix}$.
Because of the way we defined the third vector, the vectors are linearly dependent, and we can see that by looking at the $2$nd and $3$rd $\bf{columns}$ of the matrix $\begin{bmatrix} 1&2&3\\ 3&6&9\\ 4&4&8\\ \end{bmatrix}$; thus the rank of the matrix is $2$. But that's not the only way to find the rank of the matrix. We can also reduce the matrix to row echelon form and count how many $\bf{rows}$ are non-zero.
Now, while I understand why the vectors are linearly dependent by looking at the matrix $\bf{columns}$, I don't see the intuition behind finding the rank of the matrix by reducing it to row echelon form.
How would I come to understand that?
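For reference, the computation being asked about can be sketched in Python. This is a minimal sketch using exact `Fraction` arithmetic; the helper names `row_echelon` and `rank` are illustrative, not from any particular library:

```python
from fractions import Fraction

def row_echelon(matrix):
    """Reduce a matrix to row echelon form using exact rational arithmetic."""
    rows = [[Fraction(x) for x in row] for row in matrix]
    m, n = len(rows), len(rows[0])
    pivot_row = 0
    for col in range(n):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pivot = next((r for r in range(pivot_row, m) if rows[r][col] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]
        # Subtract multiples of the pivot row to zero out the entries below it.
        for r in range(pivot_row + 1, m):
            factor = rows[r][col] / rows[pivot_row][col]
            rows[r] = [a - factor * b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    return rows

def rank(matrix):
    """The rank is the number of nonzero rows left in the echelon form."""
    return sum(1 for row in row_echelon(matrix) if any(x != 0 for x in row))

A = [[1, 2, 3], [3, 6, 9], [4, 4, 8]]
print(rank(A))  # 2
```

Running this on the matrix from the question leaves two nonzero rows, giving rank $2$.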


There are 2 best solutions below


Reducing a matrix to row echelon form reveals its rank because the rank primarily measures how many "dimensions" there are within the matrix — in other words, how many linearly independent rows it has. If two rows are dependent, i.e., one is a scalar multiple of the other, row reduction can turn one of them into a zero row by subtracting the appropriate multiple.
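On the matrix from the question this is exactly what happens: the second row is $3$ times the first, so a single elimination step wipes it out:

$$\begin{bmatrix} 1&2&3\\ 3&6&9\\ 4&4&8 \end{bmatrix} \xrightarrow{R_2 \to R_2 - 3R_1} \begin{bmatrix} 1&2&3\\ 0&0&0\\ 4&4&8 \end{bmatrix} \xrightarrow{R_3 \to R_3 - 4R_1} \begin{bmatrix} 1&2&3\\ 0&0&0\\ 0&-4&-4 \end{bmatrix},$$

and after swapping $R_2$ and $R_3$ the echelon form has exactly two nonzero rows, so the rank is $2$.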

As Cheerful Parsnip has mentioned, row dimension = column dimension.

One helpful way to understand row dimension = column dimension "intuitively" is to think about $Rank(A) = Rank(A^T)$. The number of basis vectors of matrix $A$ is not going to change after you transpose it. If we define the rank of a matrix as the rank of its row space, then the rank of $A^T$ is the rank of $A$'s column space.
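For the matrix in the question this can be checked directly: reducing $A^T$ also leaves two nonzero rows,

$$A^T = \begin{bmatrix} 1&3&4\\ 2&6&4\\ 3&9&8 \end{bmatrix} \xrightarrow{\substack{R_2 \to R_2 - 2R_1 \\ R_3 \to R_3 - 3R_1}} \begin{bmatrix} 1&3&4\\ 0&0&-4\\ 0&0&-4 \end{bmatrix} \xrightarrow{R_3 \to R_3 - R_2} \begin{bmatrix} 1&3&4\\ 0&0&-4\\ 0&0&0 \end{bmatrix},$$

so $Rank(A^T) = 2 = Rank(A)$.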


We have the fundamental theorem of linear maps (read a proof online): Suppose $V$ is finite-dimensional and $T \in \mathcal{L}(V, W)$. Then range $T$ is finite-dimensional and $$\text{dim }V = \text{dim null }T + \text{dim range }T.$$

An $m \times n$ matrix $A$ can be viewed as a linear map from $\mathbb{R^n}$ to $\mathbb{R^m}$. When you reduce to rref, you are solving the vector equation $$Ax = 0$$ for $x$. This is exactly finding the null space of $A$. The number of free columns is the dimension of the solution space (null $A$), and so by the fundamental theorem of linear maps we see $$\text{dim }\mathbb{R}^n = \text{NUM FREE COLUMNS } + \text{dim range }A,$$ so $$\text{dim range }A = n - \text{NUM FREE COLUMNS } = \text{NUM PIVOT COLUMNS}.$$ Therefore the dimension of the range of $A$, which is the dimension of its column space, is the number of pivot columns in its rref.
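A minimal sketch of that count in Python, using exact `Fraction` arithmetic on the example matrix (variable names here are mine): columns $0$ and $1$ pick up pivots while column $2$ is free, matching rank-nullity.

```python
from fractions import Fraction

# The example matrix, viewed as a linear map from R^3 to R^3.
A = [[1, 2, 3], [3, 6, 9], [4, 4, 8]]
rows = [[Fraction(x) for x in row] for row in A]
m, n = len(rows), len(rows[0])

pivot_cols = []
r = 0
for c in range(n):
    # Look for a pivot at or below row r in column c.
    p = next((i for i in range(r, m) if rows[i][c] != 0), None)
    if p is None:
        continue  # no pivot: column c is a free column
    rows[r], rows[p] = rows[p], rows[r]
    # Zero out every other entry in the pivot column (full reduction).
    for i in range(m):
        if i != r and rows[i][c] != 0:
            f = rows[i][c] / rows[r][c]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
    pivot_cols.append(c)
    r += 1

free = n - len(pivot_cols)    # dim null A (number of free columns)
print(pivot_cols, free)       # [0, 1] 1  ->  rank 2 + nullity 1 = n = 3
```

So $\text{dim range }A = 3 - 1 = 2$, agreeing with the echelon-form count.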

For matrices with real entries you can get this result: $$\text{dim (range }A)^\perp = \text{ dim null }A^T,$$ which states that the vectors orthogonal (using the dot product) to every column of $A$ are precisely the ones in the null space of $A$ transpose. This follows directly from setting up the system of equations to find the vectors which are orthogonal to every column of $A$. From this, the fact that $$\text{dim range }A + \text{dim (range }A)^\perp = m,$$ and the fundamental theorem of linear maps, you can arrive at a nice result: $$\text{dim range }A = \text{dim range }A^T.$$

Therefore for real matrices, the column rank of the matrix is equal to the column rank of its transpose. Since the column rank of the transpose is equal to the row rank of the matrix, we get that $$\text{col rank }A = \text{row rank }A.$$