$$ \newcommand{\mb}[1]{\mathbf{#1}} $$
Let $\mb{B}(\mb{x})$ be an $m \times n$ matrix that is a function of the vector $\mb{x}$ ($n$ elements), and let $\mb{a}$ be a constant column vector ($m$ elements). I want to compute the derivative
$$ \frac{\partial}{\partial \mb{x}}\left[ \left\{ \mb{B}(\mb{x}) \right\}^T \mb{a} \right]. $$
This is the derivative of a vector with respect to a vector, which yields a matrix (a Jacobian matrix).
However, I could not find a compact, matrix-based notation for this matrix. I only managed to derive an expression for its rows:
$$ \begin{align} &\phantom{{}={}} \left( \frac{\partial}{\partial \mb{x}}\left[ \left\{ \mb{B}(\mb{x}) \right\}^T \mb{a} \right] \right)_{i,*} \nonumber \\ %==================== & = \frac{\partial}{\partial \mb{x}}\left( \left[ \mb{B}(\mb{x}) \right]^T \mb{a} \right)_{i}\\ %==================== & = \frac{\partial}{\partial \mb{x}}\left( \left[ \mb{b}_{i}(\mb{x}) \right]^T \mb{a} \right)\\ %==================== & = \mb{a}^T \left( \frac{\partial}{\partial \mb{x}}\mb{b}_{i}(\mb{x}) \right). \end{align} $$
where $\mb{b}_i$ is the $i$-th column of $\mb{B}$.
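As a numerical sanity check of the row-wise formula (a sketch only; the specific $\mb{B}(\mb{x})$, $\mb{a}$, and evaluation point below are made up for illustration), one can compare the rows $\mb{a}^T (\partial \mb{b}_i / \partial \mb{x})$ against a finite-difference Jacobian of $\mb{B}(\mb{x})^T \mb{a}$:

```python
import numpy as np

# Hypothetical test case (not from the question): a 2x2 matrix function B(x)
# and an arbitrary constant vector a, used only to check the row formula
#   (d/dx [B(x)^T a])_{i,*} = a^T (d b_i / d x),  with b_i the i-th column of B.
def B(x):
    return np.array([[x[0] * x[1], x[0]],
                     [x[1] ** 2,   x[0] + x[1]]])

a = np.array([2.0, -1.0])
x0 = np.array([0.7, 1.3])
eps = 1e-6
I = np.eye(2)

def c(x):
    return B(x).T @ a                       # c = B(x)^T a, an n-vector

# Reference Jacobian dc/dx by central differences (column j = dc/dx_j).
J_fd = np.column_stack([(c(x0 + eps * I[j]) - c(x0 - eps * I[j])) / (2 * eps)
                        for j in range(2)])

# Jacobian of the i-th column of B, also by central differences.
def db_dx(i):
    return np.column_stack([(B(x0 + eps * I[j])[:, i] - B(x0 - eps * I[j])[:, i]) / (2 * eps)
                            for j in range(2)])

# Row i of the result from the formula a^T (d b_i / d x).
J_rows = np.vstack([a @ db_dx(i) for i in range(2)])
assert np.allclose(J_fd, J_rows, atol=1e-6)
```

Since the entries of this $\mb{B}$ are at most quadratic in $\mb{x}$, central differences are essentially exact here.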
Is there some compact matrix notation for the result? I thought of an operator similar to the Kronecker product. It would be even better if something were known about the properties of such an operator, as I have to explore such matrices further. Any ideas? Thanks for your help!
$ \def\bbR#1{{\mathbb R}^{#1}} \def\o{{\tt1}}\def\p{\partial}\def\E{{\cal E}} \def\L{\left}\def\R{\right}\def\LR#1{\L(#1\R)} \def\Diag#1{\operatorname{Diag}\LR{#1}} \def\trace#1{\operatorname{Tr}\LR{#1}} \def\qiq{\quad\implies\quad} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\G{\grad{c}{x}} $Assume that one knows how to calculate the component-wise matrix-valued gradients
$$G_k = \grad{B}{x_k}\;\in\bbR{m\times n}$$
and is tasked with calculating the gradient of the vector
$$c = B^Ta.$$
The component-wise calculation is straightforward:
$$\grad{c}{x_k} = G_k^Ta.$$
Multiply by the standard basis vectors $\{e_k\in\bbR n\}$ and sum to recover an expression for the full gradient:
$$\grad{c}{x} = \LR{\sum_{k=1}^n G_k^T\,a\,e_k^T} \;\in\bbR{n\times n}.$$
That's about as much as one can say about the matter. However, if you tell us more about the functional dependence of $B(x)$, then perhaps a simpler expression can be found.
For example:
$\quad$if $B=xy^T,\;$ then $\G=ya^T$
$\quad$if $B=\Diag{x},\;$ then $\G=\Diag{a}$
$\quad$if $B=Mxx^TN,\;$ then $\G=\LR{a^TMx}N^T + N^Txa^TM$
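The sum formula and the third closed form above can be checked numerically (a sketch, not part of the answer; the random $M$, $N$, $a$, $x$ and the dimensions are assumptions chosen so that $B = Mxx^TN$ is well defined):

```python
import numpy as np

# Verify both dc/dx = sum_k G_k^T a e_k^T  (general formula) and the
# closed form for B = M x x^T N, i.e.  (a^T M x) N^T + N^T x a^T M,
# against a central-difference Jacobian.
rng = np.random.default_rng(0)
m, n = 3, 2
M = rng.standard_normal((m, n))   # M x x^T N is then m x n
N = rng.standard_normal((n, n))
a = rng.standard_normal(m)
x = rng.standard_normal(n)
eps = 1e-6
I = np.eye(n)

def B(x):
    return M @ np.outer(x, x) @ N

def c(x):
    return B(x).T @ a

# G_k = dB/dx_k by central differences, then the sum formula.
G = [(B(x + eps * I[k]) - B(x - eps * I[k])) / (2 * eps) for k in range(n)]
J_sum = sum(np.outer(G[k].T @ a, I[k]) for k in range(n))   # G_k^T a e_k^T

# Reference Jacobian of c, also by central differences.
J_fd = np.column_stack([(c(x + eps * I[k]) - c(x - eps * I[k])) / (2 * eps)
                        for k in range(n)])

# Closed form quoted above: (a^T M x) N^T + N^T x a^T M.
J_cf = (a @ M @ x) * N.T + np.outer(N.T @ x, a @ M)

assert np.allclose(J_sum, J_fd, atol=1e-5)
assert np.allclose(J_cf, J_fd, atol=1e-5)
```

Because $c$ is quadratic in $x$, the central differences incur only rounding error, so all three expressions agree to high precision.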