Connections between two full rank decompositions


For a matrix $A_{m\times n}$ with $r(A)=r,$ a factorization $A_{m\times n}=B_{m\times r}C_{r\times n}$ with $r(B)=r(C)=r$ is called a full rank decomposition of $A$. Existence is easy to show, and uniqueness holds in the following sense.

For any two full rank decompositions $A=B_1C_1=B_2C_2,$ there exists an invertible $r\times r$ matrix $P$ such that $B_1=B_2P$ and $C_1=P^{-1}C_2.$
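Before turning to proofs, the statement can be sanity-checked numerically. Below is a small sketch with NumPy; the sizes and the particular invertible matrix $M$ used to manufacture a second decomposition are my own choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 5, 4, 2

# Build a rank-r matrix A from a first full rank decomposition.
B1 = rng.standard_normal((m, r))
C1 = rng.standard_normal((r, n))
A = B1 @ C1

# A second decomposition, obtained by inserting an invertible r x r matrix.
M = np.array([[2.0, 1.0], [1.0, 1.0]])   # invertible
B2 = B1 @ np.linalg.inv(M)
C2 = M @ C1
assert np.allclose(A, B2 @ C2)

# Recover P with B1 = B2 P: since B2 has full column rank, least squares
# solves B2 P = B1 exactly (here P should come out equal to M).
P = np.linalg.lstsq(B2, B1, rcond=None)[0]
assert np.allclose(B1, B2 @ P)
assert np.allclose(C1, np.linalg.inv(P) @ C2)
```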

I thought about it for quite a long time, hoping to find a brief answer, but failed. I searched on MSE but didn't find a similar question. Then I tried it myself, using a rather brute-force method. In the end I succeeded, and every step seems sound. As there's no similar question on MSE, I decided to post it as a self-answered Q&A. I also hope someone can come up with a briefer argument.


Accepted answer:

If I understood correctly, you are interested in proving the following statement.

Suppose that $A = B_1C_1$ and $A = B_2C_2$ are rank decompositions, which is to say that the matrices $B_i$ are full rank of size $m \times r$ and the matrices $C_i$ are full rank of size $r \times n$, for $i = 1,2$. There exists an invertible $r \times r$ matrix $P$ such that $B_1 = B_2 P$ and $C_1 = P^{-1}C_2$.

To prove this, I will make use of the following result.

Claim: Suppose that $A$ is $m \times n$ with rank $n$. Then there exists a matrix $B$ such that $BA = I_{n}$. Similarly, if $A$ has rank $m$, then there exists a matrix $B$ such that $AB = I_{m}$.

In the above, $I_n$ denotes the identity matrix of size $n \times n$. This statement has been proved before on this site; I'll link a proof if I find one.
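The claim can also be checked concretely: when $A$ has full column rank, $A^TA$ is invertible and $B=(A^TA)^{-1}A^T$ is an explicit left inverse. A small NumPy sketch (random sizes are my own choice):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 3

# A random m x n matrix has full column rank n with probability 1.
A = rng.standard_normal((m, n))
assert np.linalg.matrix_rank(A) == n

# One explicit left inverse: B = (A^T A)^{-1} A^T, valid because A^T A
# is invertible exactly when A has full column rank.
B = np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(B @ A, np.eye(n))
```

The full-row-rank case is symmetric: $B = A^T(AA^T)^{-1}$ is then a right inverse.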

With that claim, proceed as follows. Let $Q_1$ denote a matrix such that $C_1Q_1 = I_r$ (guaranteed to exist by the claim above, since $C_1$ has full row rank $r$). We have $$ B_1C_1 = B_2 C_2 \implies \\ B_1C_1Q_1 = B_2 C_2Q_1 \implies\\ B_1 = B_2 (C_2Q_1). $$ Now, I claim that $P = C_2Q_1$ is invertible. Indeed, if this were not the case, then the rank of $P$ would be less than $r$, which would force $B_1 = B_2 P$ to have rank less than $r$, contradicting our initial supposition.

Thus, we have $$ A = B_1 C_1 = (B_2 P)C_1 = B_2C_2. $$ Using the claim again, let $Q_2$ denote a matrix such that $Q_2 B_2 = I_r$. We note that $$ B_2 PC_1 = B_2C_2 \implies\\ Q_2 B_2 PC_1 = Q_2B_2 C_2 \implies\\ PC_1 = C_2 \implies\\ C_1 = P^{-1}C_2, $$ which was what we wanted.
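The two steps of this proof can be traced numerically. The sketch below (NumPy; the random sizes and the matrix $M$ used to build a second decomposition are illustrative assumptions) constructs $Q_1$ and $Q_2$ via the explicit one-sided inverses from the claim:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 6, 5, 3

B1 = rng.standard_normal((m, r))
C1 = rng.standard_normal((r, n))
A = B1 @ C1
M = rng.standard_normal((r, r))          # invertible with probability 1
B2, C2 = B1 @ np.linalg.inv(M), M @ C1   # a second decomposition of A

# Q1: a right inverse of C1, here C1^T (C1 C1^T)^{-1}.
Q1 = C1.T @ np.linalg.inv(C1 @ C1.T)
P = C2 @ Q1                              # the matrix P from the proof
assert np.allclose(B1, B2 @ P)           # first step: B1 = B2 P

# Q2: a left inverse of B2; cancelling B2 gives P C1 = C2.
Q2 = np.linalg.inv(B2.T @ B2) @ B2.T
assert np.allclose(Q2 @ B2, np.eye(r))
assert np.allclose(P @ C1, C2)           # second step: C1 = P^{-1} C2
```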

Second answer:

As $C_1$ is a full row rank matrix, it can be reduced by row operations to its reduced row echelon form, with pivots in columns $\lambda_1<\lambda_2<\cdots<\lambda_r$: $$ P_1C_1=\begin{bmatrix} 0&\cdots&0&1&x_{1,\lambda_1+1}&\cdots&x_{1,\lambda_2-1}&0&x_{1,\lambda_2+1}&\cdots &x_{1,\lambda_3-1}&0&\cdots\\ &&&&&&&1&x_{2,\lambda_2+1}&\cdots&x_{2,\lambda_3-1}&0&\cdots\\ &&&&&&&&&&&1&\cdots\\ &&&&&&\cdots&\cdots&\cdots \end{bmatrix} $$ Let $B_1P_1^{-1}=[\alpha_1,\cdots,\alpha_r];$ then $$A=B_1P_1^{-1}P_1C_1=[0,\cdots,0,\alpha_1,x_{1,\lambda_1+1}\alpha_1,\cdots,x_{1,\lambda_2-1}\alpha_1,\alpha_2,x_{1,\lambda_2+1}\alpha_1+x_{2,\lambda_2+1}\alpha_2,\cdots]$$

Similarly, write $P_2C_2$ for the reduced row echelon form of $C_2,$ with $x_{*,*},\lambda_*$ replaced by $y_{*,*},\mu_*,$ and let $B_2P_2^{-1}=[\beta_1,\cdots,\beta_r].$ Then $$A=B_2P_2^{-1}P_2C_2=[0,\cdots,0,\beta_1,y_{1,\mu_1+1}\beta_1,\cdots,y_{1,\mu_2-1}\beta_1,\beta_2,y_{1,\mu_2+1}\beta_1+y_{2,\mu_2+1}\beta_2,\cdots]$$

Comparing the two representations of $A,$ we must have $\lambda_1=\mu_1$ and $\alpha_1=\beta_1,$ and then $x_{1,\lambda_1+1}=y_{1,\mu_1+1},\cdots.$ WLOG assume $\lambda_2\le \mu_2;$ we get the analogous equalities up to $x_{1,\lambda_2-1}=y_{1,\lambda_2-1}.$ If $\lambda_2<\mu_2,$ we would have $\alpha_2=y_{1,\lambda_2}\beta_1=y_{1,\lambda_2}\alpha_1.$ However, $B_1P_1^{-1}$ has full column rank, so $\{\alpha_1,\cdots,\alpha_r\}$ is linearly independent, a contradiction. Hence $\lambda_2=\mu_2.$

So suppose inductively that $\lambda_i=\mu_i,$ $\alpha_i=\beta_i,$ and $x_{j,\lambda_i+t}=y_{j,\mu_i+t}$ for all $i\le k-1,$ $j\le i,$ $\lambda_i+t\le \min\{\lambda_{i+1},\mu_{i+1}\}-1.$ By the linear independence of $\{\alpha_1,\cdots,\alpha_{k}\},$ we can similarly prove first that $\lambda_k=\mu_k,$ and then that $\alpha_k=\beta_k.$ Again by linear independence, $x_{j,\lambda_{k}+t}=y_{j,\mu_k+t}$ for all $j\le k,$ $\lambda_k+t\le \min\{\lambda_{k+1},\mu_{k+1}\}-1.$

Let $\lambda_{r+1}=\mu_{r+1}=n+1$. By induction, we now have $P_1C_1=P_2C_2$ and $B_1P_1^{-1}=B_2P_2^{-1}.$ So letting $P=P_2^{-1}P_1,$ we get $B_1=B_2P$ and $C_1=P^{-1}C_2,$ as required.
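The fact driving this induction is that the reduced row echelon form depends only on the row space, so $P_1C_1=P_2C_2$ whenever $C_1$ and $C_2$ differ by an invertible left factor. A NumPy sketch (with a hand-rolled `rref`, since NumPy doesn't ship one; sizes and the matrix $M$ are illustrative):

```python
import numpy as np

def rref(A, tol=1e-10):
    """Reduced row echelon form via Gauss-Jordan elimination."""
    R = A.astype(float).copy()
    rows, cols = R.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Partial pivoting for numerical stability.
        pivot = pivot_row + np.argmax(np.abs(R[pivot_row:, col]))
        if abs(R[pivot, col]) < tol:
            continue                      # no pivot in this column
        R[[pivot_row, pivot]] = R[[pivot, pivot_row]]
        R[pivot_row] /= R[pivot_row, col]
        for i in range(rows):
            if i != pivot_row:
                R[i] -= R[i, col] * R[pivot_row]
        pivot_row += 1
    return R

rng = np.random.default_rng(3)
r, n = 2, 4
C1 = rng.standard_normal((r, n))
M = np.array([[1.0, 2.0], [0.0, 1.0]])   # invertible
C2 = M @ C1                              # same row space as C1

# The two reduced row echelon forms coincide.
assert np.allclose(rref(C1), rref(C2))
```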

So now we know the structure of full rank decompositions clearly, even though this may not be as useful as other decompositions. Still, it's a good exercise for students who have just started learning linear algebra, as it uses only elementary methods.