Proof involving row equivalent matrices


I am having trouble understanding the following proof for row equivalent matrices. As an aside, I also do not fully understand the change-of-basis matrix.

Let $A$ be an $m\times{n}$ matrix with rank $r$.

Then $A$ is equivalent to a matrix $B=PAQ$ of the form

$$PAQ=\begin{pmatrix} I_{r} & 0\\ 0 & 0\end{pmatrix}=B$$

where $P$ and $Q$ are invertible matrices. For $r\ge0$, the first $r$ diagonal entries of $B$ are $1$ and all other entries are $0$. Both the row rank and the column rank of $B$ equal $r$.
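As an illustration of the statement (not of the proof technique below), one way to compute such a $P$ and $Q$ numerically is via the singular value decomposition: if $A=U\Sigma V^{T}$, then rescaling the $r$ nonzero singular values to $1$ gives $P=D^{-1}U^{T}$ and $Q=V$. The helper name `rank_normal_form`, the tolerance, and the sample matrix below are my own choices, a sketch rather than a canonical implementation:

```python
import numpy as np

def rank_normal_form(A, tol=1e-10):
    """Return invertible P, Q and the rank r, with P @ A @ Q = [[I_r, 0], [0, 0]]."""
    m, n = A.shape
    U, s, Vt = np.linalg.svd(A)      # A = U @ Sigma @ Vt, singular values descending
    r = int(np.sum(s > tol))         # numerical rank
    d = np.ones(m)
    d[:r] = 1.0 / s[:r]              # rescale the r nonzero singular values to 1
    P = np.diag(d) @ U.T             # invertible (product of invertible matrices)
    Q = Vt.T                         # orthogonal, hence invertible
    return P, Q, r

# example: a 3x3 matrix of rank 2 (the second row is twice the first)
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
P, Q, r = rank_normal_form(A)
B = P @ A @ Q                        # numerically [[1,0,0],[0,1,0],[0,0,0]]
```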

Proof.

Step 1.

Let $T:V\to{W}$ be a linear map whose matrix, relative to the bases $\mathscr{B}_{1}$ of $V$ and $\mathscr{B}_{2}$ of $W$, is

$$[T]_{\mathscr{B}_{1}}^{\mathscr{B}_{2}}=A$$

Since $rank(A)=r$, we have $rank(T)=r$. By the rank–nullity theorem, $nullity(T)=n-r$.
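A quick sanity check of this step on a concrete matrix (my own example; sympy's `rank` and `nullspace` stand in for $rank(T)$ and a basis of $N(T)$):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])      # second row is twice the first, so rank(A) = 2
n = A.cols
r = A.rank()
nullity = len(A.nullspace())    # number of basis vectors of N(T)
assert r + nullity == n         # rank-nullity: r + (n - r) = n
```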

Step 2.

Construct the bases $\mathscr{B}'_{1}$ and $\mathscr{B}'_{2}$.

Let $\{u_{1},u_{2},\ldots,u_{n-r}\}$ be a basis of the null space $N(T)$. We extend this basis to form a basis of $V$.

Let

$$\mathscr{B}'_{1}=\{u_{1},u_{2},\ldots,u_{n-r},u_{n-r+1},\ldots,u_{n}\}$$

be a basis of $V$. From the rank–nullity theorem, we know that the $r$ vectors

$$\{Tu_{n-r+1},\ldots,Tu_{n}\}$$

form a basis for the range space, $R(T)$. We extend this to form a basis of $W$.

Let

$$\mathscr{B}'_{2}=\{Tu_{n-r+1},\ldots,Tu_{n},v_{1},\ldots,v_{m-r}\}$$

be a basis of $W$.

Re-order the elements of $\mathscr{B}'_{1}$ so that the $r$ extension vectors come first:

$$\mathscr{B}'_{1}=\{u_{n-r+1},\ldots,u_{n},u_{1},u_{2},\ldots,u_{n-r}\}$$

Step 3.

Then, by definition

$$[T]_{\mathscr{B}'_{1}}^{\mathscr{B}'_{2}}=\begin{pmatrix}I_{r} & 0_{r\times(n-r)} \\ 0_{(m-r)\times{r}} & 0_{(m-r)\times(n-r)}\end{pmatrix}$$

Hence, with $P=[I]_{\mathscr{B}_{2}}^{\mathscr{B}'_{2}}$ and $Q=[I]_{\mathscr{B}'_{1}}^{\mathscr{B}_{1}}$,

$$PAQ=\begin{pmatrix}I_{r} & 0 \\ 0 & 0\end{pmatrix}$$
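The jump from Step 2 to Step 3 can be made concrete. The sketch below (my own example, using sympy) follows the proof literally: it builds $\mathscr{B}'_1$ by extending a null-space basis, takes the images of the extension vectors to start $\mathscr{B}'_2$, and checks that the matrix of $T$ in the new bases has the block form. $Q$ has the reordered $\mathscr{B}'_1$ as columns, and $P$ is the inverse of the matrix whose columns are $\mathscr{B}'_2$:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])
m, n = A.shape

# Step 2: basis {u_1, ..., u_{n-r}} of N(T), extended to a basis of V = R^n
null_basis = A.nullspace()
r = n - len(null_basis)
basis_V = list(null_basis)
for j in range(n):                       # extend with standard basis vectors
    e = sp.eye(n).col(j)
    if sp.Matrix.hstack(*(basis_V + [e])).rank() > len(basis_V):
        basis_V.append(e)
extension = basis_V[n - r:]              # u_{n-r+1}, ..., u_n

# the images T u_{n-r+1}, ..., T u_n form a basis of R(T); extend it to W = R^m
basis_W = [A * u for u in extension]
for i in range(m):
    e = sp.eye(m).col(i)
    if sp.Matrix.hstack(*(basis_W + [e])).rank() > len(basis_W):
        basis_W.append(e)

# Step 3: put the extension vectors first in B'_1, then change bases
Q = sp.Matrix.hstack(*(extension + null_basis))
P = sp.Matrix.hstack(*basis_W).inv()
B = P * A * Q                            # the block matrix [[I_r, 0], [0, 0]]
```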


How, as if by magic, did we go from Step 2 to Step 3? What is the intuition behind this result, and what are its consequences?

I know the result for the change-of-basis matrix. The matrix of a transform $T$ relative to the bases $\mathscr{B}'_{1}$, $\mathscr{B}'_{2}$ is:

$$[T]_{\mathscr{B}'_{1}}^{\mathscr{B}'_{2}}=[I]_{\mathscr{B}_{2}}^{\mathscr{B}'_{2}}[T]_{\mathscr{B}_{1}}^{\mathscr{B}_{2}}[I]_{\mathscr{B}'_{1}}^{\mathscr{B}_{1}}$$

How is this result derived in the first place? Any links would be helpful!

Best answer:

Make sure you understand what this result is saying in terms of vector spaces and bases: it is actually a very simple result; the only fiddly part is indexing the bases.

Let $T: V \to W$ be a linear map, where $V$ and $W$ are finite-dimensional spaces. Pick any subspace $V_1 \subseteq V$ complementary to $\ker T$, so that $V = V_1 \oplus \ker T$. Then, pick any basis $v_1, \ldots, v_r$ of $V_1$, and $v_{r+1}, \ldots, v_n$ of $\ker T$. We then know that $Tv_1, \ldots, Tv_r$ span the image of $T$ (this is essentially the only nontrivial part of this: make sure you can prove it!). Set $w_1 = Tv_1$, $\ldots$, $w_r = Tv_r$, and extend this partial basis up to a full basis $w_1, \ldots, w_{m}$ of $W$ by some choice of a subspace complementary to the image of $T$.

Now, look at how the linear map $T$ acts in this basis: $$ T v_i = \begin{cases} w_i & \text{for } 1 \leq i \leq r \\ 0 & \text{otherwise} \end{cases}$$

So this is already precisely a block matrix of the form $$ \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} $$

You can also see that $r$ must be equal to the dimension of the image of $T$, in other words, the rank of $T$. (This basically gives a proof of the rank-nullity theorem).

The rest of the result, in terms of matrices, really just follows by understanding how changes of bases work; if $T$ is represented by some matrix $A$, then a change of basis in $V$ is the same as multiplication on the right of $A$ by some invertible matrix, and a change of basis in $W$ is really just a multiplication on the left of $A$ by some invertible matrix.
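To make the last paragraph concrete (my own sketch, with arbitrarily chosen invertible matrices): if the columns of an invertible matrix $M_1$ hold a basis $\mathscr{B}_1$ of $V=\mathbb{R}^n$ in standard coordinates, then $[T]_{\mathscr{B}_1}^{\mathscr{B}_2}=M_2^{-1}AM_1$, and changing either basis is exactly multiplication by an invertible matrix on the right (inputs) or on the left (outputs):

```python
import numpy as np

m, n = 3, 4
A = np.array([[1., 0., 2., 1.],
              [0., 1., 1., 0.],
              [1., 1., 3., 1.]])          # matrix of T in the standard bases

M1 = np.triu(np.ones((n, n)))            # columns: basis B1 of V (det = 1)
M2 = np.tril(np.ones((m, m)))            # columns: basis B2 of W (det = 1)
T_B = np.linalg.inv(M2) @ A @ M1         # [T] relative to B1, B2

M1p = np.eye(n) + np.eye(n, k=1)         # new basis B1' (unit bidiagonal, det = 1)
M2p = np.eye(m) + 2 * np.eye(m, k=-1)    # new basis B2' (det = 1)
T_Bp = np.linalg.inv(M2p) @ A @ M1p      # [T] relative to B1', B2'

# change-of-basis matrices: the right factor re-expresses inputs, the left outputs
C_in = np.linalg.inv(M1) @ M1p           # [I] from B1' to B1
C_out = np.linalg.inv(M2p) @ M2          # [I] from B2 to B2'
assert np.allclose(T_Bp, C_out @ T_B @ C_in)
```

The final assertion is the identity $[T]_{\mathscr{B}'_1}^{\mathscr{B}'_2}=[I]_{\mathscr{B}_2}^{\mathscr{B}'_2}\,[T]_{\mathscr{B}_1}^{\mathscr{B}_2}\,[I]_{\mathscr{B}'_1}^{\mathscr{B}_1}$; it holds because the coordinate conversions compose and the intermediate factors cancel.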