Matrix pseudoinverse with an additional term


I would like to solve:

$M =ABC$ for $B$ where $M$, $A$, and $C$ are known rectangular matrices of compatible dimensions.

e.g. $(1000\times 200) = (1000\times 30)\cdot B\cdot (14000\times 200)$, so that $B$ is $30\times 14000$.

I am familiar with the process for solving for $B$ when there is no term $C$.

Thank you.

2 Answers

BEST ANSWER

There are two options. First, if $A$ is injective and $C$ is surjective, then $A$ has a left inverse $A^+$ and $C$ has a right inverse $C^+$, which can be computed as discussed here. Since $A^+A=I$ and $CC^+=I$, multiplying $M=ABC$ by $A^+$ on the left and by $C^+$ on the right gives $B=A^+MC^+$.
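A minimal NumPy sketch of this first option (my own illustration with made-up sizes; it assumes $A$ has full column rank and $C$ has full row rank, so the Moore-Penrose pseudoinverses act as a left and a right inverse):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in sizes, not the poster's actual dimensions.
n, p, q, m = 50, 6, 8, 40

A = rng.standard_normal((n, p))       # full column rank (injective) with probability 1
C = rng.standard_normal((q, m))       # full row rank (surjective) with probability 1
B_true = rng.standard_normal((p, q))
M = A @ B_true @ C

# A^+ A = I because A is injective; C C^+ = I because C is surjective.
B = np.linalg.pinv(A) @ M @ np.linalg.pinv(C)

print(np.allclose(B, B_true))  # True under the rank assumptions above
```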

Otherwise, $B\mapsto ABC$ is a linear map, so the equation $ABC = M$ can be solved using standard techniques for solving linear equations. Namely, let $M$ be $n\times m$, $A$ be $n\times p$, $B$ be $p\times q$ and $C$ be $q\times m$. Then write $B\mapsto ABC$ as a matrix by using the matrices $E_{ij}$, where $[E_{ij}]_{k\ell}=\delta_{ik}\delta_{j\ell}$ and $1\le i\le p$, $1\le j\le q$, as a basis for $M_{p\times q}$ (the matrices of size $p\times q$) and the analogous basis $F_{ij}$ of $M_{n\times m}$. Then $$[AE_{ij}C]_{k\ell}=\sum_{r=1}^p \sum_{s=1}^q a_{kr}[E_{ij}]_{rs}c_{s\ell} = \sum_{r=1}^p\sum_{s=1}^q a_{kr}\delta_{ri}\delta_{sj}c_{s\ell} = a_{ki}c_{j\ell}.$$

Now you can use this to write the linear map $B\mapsto ABC$ as a matrix, and solve the resulting linear system using standard techniques such as row reduction.
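Here is a sketch of that construction in NumPy (my own illustration, not part of the answer): each column of the assembled matrix is $AE_{ij}C$ flattened, using the entry formula $a_{ki}c_{j\ell}$ derived above, and the system is then solved by least squares.

```python
import numpy as np

def solve_abc(A, C, M):
    """Solve A @ B @ C = M for B by assembling the matrix of B -> A B C."""
    n, p = A.shape
    q, m = C.shape
    T = np.empty((n * m, p * q))
    for i in range(p):
        for j in range(q):
            # Column (i, j) is vec(A E_ij C); its (k, l) entry is a_{ki} c_{jl}.
            T[:, i * q + j] = np.outer(A[:, i], C[j, :]).ravel()
    vec_b, *_ = np.linalg.lstsq(T, M.ravel(), rcond=None)
    return vec_b.reshape(p, q)
```

With this row-major flattening, $T$ is exactly the Kronecker product $A\otimes C^{T}$, which connects to the other answer below.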

Edit: here is an example of the second option using $2\times 2$ matrices. Let $$\newcommand{\bmat}{\begin{pmatrix}} \newcommand{\emat}{\end{pmatrix}} A=\bmat 1 & 2 \\ 3 & 6 \emat \quad\text{and}\quad C=\bmat 0 & 1 \\ 0 & 0 \emat. $$ Now if $$B=\bmat a & b \\ c & d \emat, $$ then $$ABC = \bmat 0 & a +2c \\ 0 & 3a + 6c \emat. $$

Rewriting $B$ as the column vector $\bmat a \\ b \\ c \\ d \emat$, we see that the matrix for $B\mapsto ABC$ is $$ \bmat 0 & 0 & 0 & 0 \\ 1 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 \\ 3 & 0 & 6 & 0 \emat. $$

Writing $M$ as $\bmat a' & b' \\ c' & d' \emat$, or as the column vector $\bmat a' \\ b' \\ c' \\ d' \emat$, we can apply row reduction to the augmented matrix (drawn here without the usual vertical bar): $$ \bmat 0 & 0 & 0 & 0 & a' \\ 1 & 0 & 2 & 0 & b' \\ 0 & 0 & 0 & 0 & c' \\ 3 & 0 & 6 & 0 & d' \emat. $$ Row reducing, we get $$ \bmat 1 & 0 & 2 & 0 & b' \\ 0 & 0 & 0 & 0 & a' \\ 0 & 0 & 0 & 0 & c' \\ 0 & 0 & 0 & 0 & d'-3b' \emat. $$

Hence $M=ABC$ has a solution if and only if $a'=0$, $c'=0$, and $d'=3b'$. In that case $b$ and $d$ never appear in $ABC$, so they are free, and $a=b'-2c$; the solutions (parametrized by $t$, $u$, $v$) are $$ B\in \left\{\bmat b'-2t & u \\ t & v \emat:t,u,v\in\Bbb{R}\right\}. $$
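As a quick numerical check of this example (my own addition), one can pick a consistent right-hand side and verify a solution from the family above:

```python
import numpy as np

A = np.array([[1., 2.], [3., 6.]])
C = np.array([[0., 1.], [0., 0.]])

# A consistent M: a' = c' = 0 and d' = 3 b'.
bp = 5.0
M = np.array([[0., bp], [0., 3 * bp]])

t, u, v = 2.0, -1.0, 4.0
B = np.array([[bp - 2 * t, u], [t, v]])
print(np.allclose(A @ B @ C, M))  # True for any t, u, v
```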

SECOND ANSWER

@jgon's answer implicitly uses the Kronecker product: vectorizing turns $ABC=M$ into $(C^{T}\otimes A)\operatorname{vec}(B)=\operatorname{vec}(M)$ (with column-major $\operatorname{vec}$). When the matrices $A,C$ are large, this method has a very high computational cost and, consequently, should be avoided.
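A small sketch (my own) that verifies the vectorization identity and estimates the storage cost at the question's sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))
C = rng.standard_normal((5, 2))

# vec(ABC) = (C^T kron A) vec(B), with column-major (Fortran-order) vec.
lhs = (A @ B @ C).ravel(order="F")
rhs = np.kron(C.T, A) @ B.ravel(order="F")
print(np.allclose(lhs, rhs))  # True

# At the question's sizes the Kronecker matrix is (1000*200) x (30*14000):
entries = (1000 * 200) * (30 * 14000)
print(f"{entries:.1e} entries, ~{entries * 8 / 1e9:.0f} GB in float64")
```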

See my answer in

Is there any risk to transform to $(B^{T} \otimes A)\operatorname{vec}(X)=\operatorname{vec}(C) $ for solving $AXB=C$ for X

for an efficient method when the matrices $A,C$ are square. In particular, reference (ii), Theorem 7.1, gives a uniqueness theorem for the solution of $AXD+EXB=C$.
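For reference, SciPy ships a Bartels-Stewart-type solver for the classical special case $AX+XB=Q$ with square coefficients; the sketch below (my own, handling only this special case, not the generalized equation $AXD+EXB=C$ above) shows the idea:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
X_true = rng.standard_normal((4, 4))
Q = A @ X_true + X_true @ B

# Bartels-Stewart: O(n^3), versus O(n^6) for the naive Kronecker system.
X = solve_sylvester(A, B, Q)
print(np.allclose(A @ X + X @ B, Q))  # residual check; the solution is unique
                                      # when A and -B share no eigenvalues
```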