Distributive property of the Hadamard product with respect to matrix multiplication


I have a matrix product $\mathbf{B} \cdot \Omega$, where $\mathbf{B}$ is $n \times m$ with $n>m$, and $\Omega$ is a column vector of size $m \times 1$, with $m>1$.

The components of $\Omega$ are defined as $$ \omega_i = k_i + x_i p_i. $$ Now I wish to separate these component parts, because the $k_i$ are constant, the $p_i$ are variable but known, and the $x_i$ are variable and unknown. My goal is to "find" the $x_i$ components; all other values are known.

If I call $\mathrm{K}, \mathrm{X}, \mathrm{P}$ the column vectors each containing the respective components, and $\mathbf{M}_P$ the diagonal matrix having the $p_i$ components on its diagonal, such that $$ \Omega = \mathrm{K} + (\mathrm{X} \circ \mathrm{P}) \\ = \mathrm{K} + \mathbf{M}_P \cdot \mathrm{X} $$ where $\mathrm{A} \circ \mathrm{B}$ is the element-wise Hadamard product,
is it possible to have the following equivalence (or something similar)? $$ \mathbf{B} \cdot \Omega = \mathbf{B}\cdot\mathrm{K} + \mathbf{B}_1\cdot\mathrm{X} + \mathbf{B}_2\cdot\mathrm{P} $$ And if the answer is yes, what are the matrices $\mathbf{B}_1$ and $\mathbf{B}_2$?
Is it possible to have either $\mathbf{B}_1 = \mathbf{B}$ or $\mathbf{B}_2 = \mathbf{B}$, so that I can write $\mathbf{B}_u = f(\mathbf{X})$? ($\mathbf{B}_u$ is the unknown matrix; so, for example, if $\mathbf{B}_2 = \mathbf{B}$, then $\mathbf{B}_1 \equiv \mathbf{B}_u$.)
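The setup above is easy to check numerically. A minimal NumPy sketch (the size $m$ and the vector values are arbitrary, chosen only to illustrate the identity $\mathrm{X} \circ \mathrm{P} = \mathbf{M}_P \cdot \mathrm{X}$):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
k = rng.standard_normal(m)   # constant components k_i
p = rng.standard_normal(m)   # known variable components p_i
x = rng.standard_normal(m)   # unknown components x_i

omega = k + x * p            # omega_i = k_i + x_i * p_i  (Hadamard product)
M_P = np.diag(p)             # diagonal matrix with the p_i on the diagonal

# The Hadamard product with p equals multiplication by Diag(p)
assert np.allclose(x * p, M_P @ x)
assert np.allclose(omega, k + M_P @ x)
```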

I hope the problem is well-posed and my English (and math :D ) is understandable.

Thanks everyone for the help

Eventine

Best answer

$ \def\L{\left}\def\R{\right}\def\LR#1{\L(#1\R)} \def\Diag#1{\operatorname{Diag}\LR{#1}} $Let's use lowercase letters to denote vectors and reserve uppercase letters for matrix variables. Then you can write your equation as $$w = k + p\odot x$$ where $(\odot)$ denotes the elementwise/Hadamard product.

Define the diagonal matrix $$P=\Diag{p} \quad\implies\quad Px = p\odot x$$ Now you can solve for the unknown vector directly $$\eqalign{ Px &= (w-k) \qquad\implies\quad x = P^{-1}(w-k) \\ }$$ Multiply the solution vector by $B_1$ $$\eqalign{ &B_1x = B_1P^{-1}w - B_1P^{-1}k \\ &B_1P^{-1}w = B_1P^{-1}k + B_1x \\ &Bw = Bk + BPx + 0p \qquad\implies\quad B=B_1P^{-1} \\ }$$ In other words, your decomposition holds with $B_1 = BP$ and $B_2 = 0$. (This solution was pointed out in the comments to your post.)
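A quick NumPy check of this derivation (dimensions arbitrary; $p$ is drawn away from zero so that $P^{-1}$ exists):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 3
B = rng.standard_normal((n, m))
k = rng.standard_normal(m)
x = rng.standard_normal(m)
p = rng.uniform(1.0, 2.0, m)      # nonzero entries, so P = Diag(p) is invertible

P = np.diag(p)
w = k + p * x                     # w = k + p ∘ x

# Recover the unknown vector directly: x = P^{-1}(w - k)
x_rec = np.linalg.solve(P, w - k)
assert np.allclose(x_rec, x)

# The decomposition B w = B k + B1 x + B2 p holds with B1 = B P, B2 = 0
B1 = B @ P
assert np.allclose(B @ w, B @ k + B1 @ x)
```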


Unfortunately, there is no distributive rule for mixed Hadamard and matrix products in general.

However, there is a specialized rule for a pair of dense matrices $(A,B)$ and a pair of diagonal matrices $(X,Y)$ $$\eqalign{ \LR{A\odot B}\LR{X\odot Y} &= \LR{AX}\odot\LR{BY} \\ \LR{X\odot Y}\LR{A\odot B} &= \LR{XA}\odot\LR{YB} \\ }$$ And there is a rule for dense matrices and the Cartesian basis vectors $\{e_k\}$ $$\eqalign{ \LR{A\odot B}e_k &= \LR{Ae_k}\odot\LR{Be_k} \\ e_k^T\LR{A\odot B} &= \LR{e_k^TA}\odot\LR{e_k^TB} \\ }$$ And there are several rules for when all of the variables are vectors $$\eqalign{ \LR{a\odot b}\LR{x\odot y}^T &= \LR{ax^T}\odot\LR{by^T} = \LR{ay^T}\odot\LR{bx^T} \\ \LR{a\odot b}^T\LR{x\odot y} &= a^T\LR{b\odot x\odot y} = {\tt1}^T\LR{a\odot b\odot x\odot y} \\ }$$ But alas, there is no such rule when all of the variables are dense matrices.
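These specialized rules (and the failure of the general dense case) are easy to verify numerically. A NumPy sketch with arbitrary random matrices (note that for diagonal $X,Y$, the Hadamard product `X * Y` is again diagonal):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
X = np.diag(rng.standard_normal(n))   # diagonal
Y = np.diag(rng.standard_normal(n))   # diagonal

# Dense/diagonal rule: (A ∘ B)(X ∘ Y) = (AX) ∘ (BY)
assert np.allclose((A * B) @ (X * Y), (A @ X) * (B @ Y))

# Basis-vector rule: (A ∘ B) e_k = (A e_k) ∘ (B e_k)
e2 = np.eye(n)[:, 2]
assert np.allclose((A * B) @ e2, (A @ e2) * (B @ e2))

# Vector rule: (a ∘ b)(x ∘ y)^T = (a x^T) ∘ (b y^T)
a, b, x, y = (rng.standard_normal(n) for _ in range(4))
assert np.allclose(np.outer(a * b, x * y), np.outer(a, x) * np.outer(b, y))

# But for generic dense C, D the analogous identity fails:
C = rng.standard_normal((n, n))
D = rng.standard_normal((n, n))
assert not np.allclose((A * B) @ (C * D), (A @ C) * (B @ D))
```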