Any idea which matrix theorem this is?


I came across a theorem that Boyd uses to convert the simplex to the form of a polyhedron. I don't know anything about this theorem. The theorem states:

If $B$ has rank $k$, then we can find two matrices $A_1$ and $A_2$ such that: $AB = \left[\begin{array}{c}A_1 \\ A_2\end{array}\right]B = \left[\begin{array}{c}I \\ 0\end{array}\right]$

Can someone give me some direction?


Best answer:

I am presuming you mean that $B$ has full rank and that the dimension of $B$'s domain is $k$, which is less than or equal to the dimension of the codomain. (I am assuming strictly less than in the following; if the dimensions are equal, you can trivially take $A = B^{-1}$.)

(If $B=\begin{bmatrix} 1 & 0 \end{bmatrix}$, then $B$ has rank $1$, but no matrix $A$ can transform $B$ into the above form.)

Hence we may write $B = \begin{bmatrix} b_1 & \cdots & b_k \end{bmatrix}$, where the columns $b_1, \dots, b_k$ are linearly independent. Since we are assuming that $B$ has rank $k$, we can find a permutation matrix $\Pi$ such that the top $k \times k$ portion of $\Pi B$ is invertible. Write $\Pi B = \tilde{B} = \begin{bmatrix} \tilde{B_1} \\ \tilde{B_2} \end{bmatrix}$, where $\tilde{B_1}$ is $k \times k$. Then let $\tilde{A}= \begin{bmatrix} \tilde{B_1}^{-1} & 0 \\ -\tilde{B_2}\tilde{B_1}^{-1} & I \end{bmatrix}$, and notice that $(\tilde{A} \Pi) B = \begin{bmatrix} I \\ 0 \end{bmatrix}$. To finish, let $A = \begin{bmatrix} A_1 \\ A_2 \end{bmatrix} = \tilde{A} \Pi$.
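A quick numerical sketch of this construction (the sizes and random $B$ are my own choices, not from the question). For a generic random $B$ the top $k \times k$ block is already invertible, so the sketch takes $\Pi = I$ rather than searching for a permutation; a pivoted LU or QR factorization would find $\Pi$ robustly in general:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3
B = rng.standard_normal((n, k))   # almost surely full column rank

# Assume the top k x k block of B is already invertible (Pi = I);
# this holds for generic random B but not for arbitrary B.
Pi = np.eye(n)
B_t = Pi @ B
B1, B2 = B_t[:k, :], B_t[k:, :]   # B1 is the invertible k x k block

B1_inv = np.linalg.inv(B1)
# A_tilde = [[B1^{-1}, 0], [-B2 B1^{-1}, I]];  A = A_tilde @ Pi
A = np.block([[B1_inv,       np.zeros((k, n - k))],
              [-B2 @ B1_inv, np.eye(n - k)       ]]) @ Pi

target = np.vstack([np.eye(k), np.zeros((n - k, k))])
assert np.allclose(A @ B, target)   # A B = [I; 0]
```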

Another answer:

One alternative (but not computationally recommended) way to see this is to apply singular value decomposition (SVD). Suppose $B$ is $n\times k$, where $n\ge k=\operatorname{rank}(B)$. By SVD, there exist an $n\times n$ unitary matrix $U$, a $k\times k$ positive diagonal matrix $\Sigma$ and a $k\times k$ unitary matrix $V$ such that $$ B = U\begin{pmatrix}\Sigma\\0\end{pmatrix}V^\ast. $$ Rewriting this as $$ B = U\begin{pmatrix}\Sigma V^\ast\\&I_{n-k}\end{pmatrix}\begin{pmatrix}I_k\\0\end{pmatrix},\ \text{ i.e. } \ \underbrace{\begin{pmatrix}V\Sigma^{-1}\\&I_{n-k}\end{pmatrix}U^\ast}_{A}\ B = \begin{pmatrix}I_k\\0\end{pmatrix}. $$ So, $A_1$ is the first $k$ rows of $A$ and $A_2$ is the last $n-k$ rows of $A$.
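The SVD route can also be checked numerically. The sketch below (dimensions and variable names are mine) builds $A = \operatorname{blockdiag}(V\Sigma^{-1}, I_{n-k})\,U^\ast$ from a full SVD and verifies $AB = \begin{pmatrix}I_k\\0\end{pmatrix}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 3
B = rng.standard_normal((n, k))   # full column rank almost surely

# Full SVD: B = U @ [[diag(s)], [0]] @ Vh, with U (n x n), Vh (k x k)
U, s, Vh = np.linalg.svd(B, full_matrices=True)
V = Vh.conj().T

# A = blockdiag(V Sigma^{-1}, I_{n-k}) @ U*
top = V @ np.diag(1.0 / s)        # the k x k block V Sigma^{-1}
A = np.block([[top,                  np.zeros((k, n - k))],
              [np.zeros((n - k, k)), np.eye(n - k)       ]]) @ U.conj().T

A1, A2 = A[:k, :], A[k:, :]       # first k rows and last n-k rows
assert np.allclose(A @ B, np.vstack([np.eye(k), np.zeros((n - k, k))]))
```

As the answer notes, this is conceptually clean but not the recommended computation; the pivoted elimination in the first answer is cheaper.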

A similar way to obtain $A$ is to first bring $B$ to row-reduced echelon form by left-multiplying by a product of elementary matrices, then bring the result to column-reduced echelon form by right-multiplying by a product $E$ of elementary matrices. Finally, move $E$ from the right to the left, just as $V^\ast$ was moved from the right to the left above.