By the vectorization $\mathrm{vec}(A)$ of a matrix $A\in\mathbb{R}^{n\times n}$ I mean the vector in $\mathbb{R}^{n^2}$ obtained by stacking the columns of $A$ one on top of the other.
I have a function $g(x,A) = (A^T\otimes CA)x$, for a fixed matrix $C\in\mathbb{R}^{n\times n}$, and with $x\in\mathbb{R}^{n^2}$. I want to compute the derivative of $g(x,A)$ with respect to $\mathrm{vec}(A)$. I expect this to be a matrix of size $n^2\times n^2$ of course.
If, instead of one of the two $A$'s, I had a fixed $B\in\mathbb{R}^{n\times n}$, I would know what to do. For example, for $h(x,A)=(A^T\otimes CB)x$ I know that $$ h(x,A)=\mathrm{vec}(CB\,\mathrm{Mat}(x)A)=(I_n\otimes CB\,\mathrm{Mat}(x))\mathrm{vec}(A), $$ and hence the derivative is simply $(I_n\otimes CB\,\mathrm{Mat}(x))$. Similar reasoning applies when I instead fix the first $A$. Here $\mathrm{Mat}$ denotes the inverse of $\mathrm{vec}$, i.e. $\mathrm{vec}(\mathrm{Mat}(x))=x$.
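As a sanity check, the identity above can be verified numerically. Here is a minimal NumPy sketch (the size `n = 3` and the helper names `vec`/`Mat` are just for illustration; column-stacking corresponds to NumPy's Fortran order):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
x = rng.standard_normal(n * n)

vec = lambda M: M.flatten(order="F")          # stack columns
Mat = lambda v: v.reshape((n, n), order="F")  # inverse of vec

# (A^T ⊗ CB) x  versus  (I_n ⊗ CB Mat(x)) vec(A): both equal vec(CB Mat(x) A)
lhs = np.kron(A.T, C @ B) @ x
rhs = np.kron(np.eye(n), C @ B @ Mat(x)) @ vec(A)
print(np.allclose(lhs, rhs))  # True
```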
To differentiate the whole of $g(x,A)$, I would be tempted to sum the two contributions obtained by rewriting $g(x,A)$ in these two different and convenient ways. However, I worry that this is not correct.
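For what it's worth, one can at least test this numerically by comparing the sum of the two contributions against a finite-difference Jacobian of $g$. A NumPy sketch (sizes and seeds arbitrary; the two candidate terms $I_n\otimes CA\,\mathrm{Mat}(x)$ and $(\mathrm{Mat}(x)A)^T\otimes C$ come from freezing one copy of $A$ at a time as above):

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
x = rng.standard_normal(n * n)

vec = lambda M: M.flatten(order="F")
Mat = lambda v: v.reshape((n, n), order="F")

def g(vecA):
    Am = Mat(vecA)
    return np.kron(Am.T, C @ Am) @ x

# Candidate Jacobian: sum of the two "freeze one A" contributions
J1 = np.kron(np.eye(n), C @ A @ Mat(x))  # vary the A in A^T, freeze the A in CA
J2 = np.kron((Mat(x) @ A).T, C)          # vary the A in CA, freeze the A in A^T
J = J1 + J2

# Central-difference Jacobian of g at vec(A)
eps = 1e-6
v = vec(A)
J_fd = np.empty((n * n, n * n))
for j in range(n * n):
    e = np.zeros(n * n)
    e[j] = eps
    J_fd[:, j] = (g(v + e) - g(v - e)) / (2 * eps)

print(np.allclose(J, J_fd, atol=1e-6))  # True
```

Since $g$ is quadratic in the entries of $A$, the central difference should agree with the candidate Jacobian up to floating-point error if the sum-of-contributions idea is right.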
How can I think about solving this problem?