For an equation like this:
$$ M = \left( A\,x - B \otimes y \right) \oslash C $$
where $\otimes$ denotes Hadamard (element-wise) multiplication and $\oslash$ denotes Hadamard (element-wise) division.
A, B and C are real matrices of sizes (n×m), (n×k) and (n×k), respectively, while x and y are vectors of sizes (m×1) and (n×1), respectively.
I need to find the derivative of $M$ with respect to the vector $x$.
Note that the symbols $\otimes$ and $\oslash$ represent the Hadamard product and division, which are built into certain programming languages under various names; sometimes they are called broadcast multiplication or array multiplication. See reference.
Here is simple MATLAB code (using the Symbolic Math Toolbox) to illustrate array multiplication (broadcast multiplication).
n = 4;
m = 3;
k = 2;
A = sym('A', [n m]);   % symbolic n-by-m matrix
B = sym('B', [n k]);
C = sym('C', [n k]);
x = sym('x', [m 1]);
y = sym('y', [n 1]);
% B.*y and ./C use implicit expansion (broadcasting): the (n x 1)
% columns A*x and y are expanded across the k columns.
M = (A*x - B.*y)./C
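For readers without MATLAB, here is a minimal NumPy translation with numeric data (an assumption on my part: NumPy's broadcasting rules match MATLAB's implicit expansion for these shapes):

```python
import numpy as np

# Random numeric stand-ins for the symbolic matrices above.
rng = np.random.default_rng(0)
n, m, k = 4, 3, 2

A = rng.standard_normal((n, m))
B = rng.standard_normal((n, k))
C = rng.standard_normal((n, k)) + 2.0   # shift C away from zero
x = rng.standard_normal((m, 1))
y = rng.standard_normal((n, 1))

# A @ x is (n,1); B * y broadcasts y across the k columns -> (n,k);
# the subtraction then broadcasts the (n,1) column -> (n,k).
M = (A @ x - B * y) / C
print(M.shape)   # (4, 2)
```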
The question is, what is the derivative of $M$ with respect to the vector $x$?
Modern computer matrix languages (MATLAB, Julia, etc.) do indeed use broadcasting when asked to perform element-wise multiplication on matrices whose sizes are incompatible, but I think that long-term exposure to such languages has dulled your math instincts because $\ldots$
$\ldots$ your equation does not make sense mathematically as written. Here is a modified equation with the broadcasting operations shown explicitly: $$\eqalign{ {\tt1_k} &\in{\mathbb R}^{k\times 1} \quad&\big({\rm all\,ones\,vector}\big) \\ C\otimes D &= {\tt 1_{nk}}\in{\mathbb R}^{n\times k} \quad&\big({\rm all\,ones\,matrix;\ i.e.\ }D{\rm\ is\ the\ elementwise\ inverse\ of\ }C\big) \\ M &= \big(Ax{\tt1_k}^T - B\otimes y{\tt1_k}^T\big)\otimes D \\ }$$ The advantage of using the matrix $D$ instead of $C$ is that the operands of a Hadamard product commute, while those of a Hadamard division do not.
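A quick numerical sanity check of the explicit-broadcast form against the implicitly-broadcast one (variable names here are illustrative, not from the post):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 4, 3, 2
A = rng.standard_normal((n, m))
B = rng.standard_normal((n, k))
C = rng.standard_normal((n, k)) + 2.0
x = rng.standard_normal((m, 1))
y = rng.standard_normal((n, 1))

ones_k = np.ones((k, 1))
D = 1.0 / C   # C (Hadamard-times) D = all-ones matrix

M_broadcast = (A @ x - B * y) / C                           # implicit broadcasting
M_explicit = (A @ x @ ones_k.T - B * (y @ ones_k.T)) * D    # broadcasts written out

print(np.allclose(M_broadcast, M_explicit))   # True
```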
To proceed, calculate the SVD of $A$ and substitute it into the definition of $M$. $$\eqalign{ A &= \sum_{i=1}^{{\rm rank}(A)} \sigma_i u_iv_i^T \\ M &= -\big(D\otimes B\otimes y{\tt1_k}^T\big) + D\otimes\sum_{i=1}^{{\rm rank}(A)} \sigma_iu_i \color{red}{v_i^Tx}\,{\tt1_k}^T \\ &= E + \sum_{i=1}^{{\rm rank}(A)}\Big(D\otimes \sigma_iu_i{\tt1_k}^T\Big) \star(v_i^Tx) \\ &= E + \sum_{i=1}^{{\rm rank}(A)}\big(F_i\big)\star(v_i^Tx) \\ }$$ Then calculate the differential, and from it the gradient.
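The SVD expansion can be checked numerically; note that since $M$ carries $-B\otimes y$, the $x$-independent term is $E=-D\otimes B\otimes y{\tt1_k}^T$. A sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 4, 3, 2
A = rng.standard_normal((n, m))
B = rng.standard_normal((n, k))
C = rng.standard_normal((n, k)) + 2.0
x = rng.standard_normal((m, 1))
y = rng.standard_normal((n, 1))
D = 1.0 / C

U, s, Vt = np.linalg.svd(A)       # A = sum_i s[i] * U[:,i] @ Vt[i,:]
r = np.linalg.matrix_rank(A)

E = -D * (B * y)                  # constant (x-independent) term
M_svd = E.copy()
for i in range(r):
    F_i = D * (s[i] * np.outer(U[:, i], np.ones(k)))   # D ⊗ sigma_i u_i 1_k^T
    M_svd += F_i * float(Vt[i, :] @ x[:, 0])           # scalar v_i^T x scales F_i

M_direct = (A @ x - B * y) / C
print(np.allclose(M_svd, M_direct))   # True
```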
$$\eqalign{ dM &= \sum_{i=1}^{{\rm rank}(A)}\big(F_i\big)\star(v_i^Tdx) \\ \Gamma = \frac{\partial M}{\partial x} &= \sum_{i=1}^{{\rm rank}(A)} F_i\star v_i^T \\ }$$ The $\star$ denotes a tensor/dyadic product, and this gradient is a third-order tensor whose $(p,q,r)^{th}$ component is found by contracting with the $(p^{th},q^{th},r^{th})$ vectors from the standard basis. $$\eqalign{ &e_p \in {\mathbb R}^{n\times 1} \qquad e_q \in {\mathbb R}^{k\times 1} \qquad e_r \in {\mathbb R}^{m\times 1} \\ &\Gamma_{pqr}=\frac{\partial M_{pq}}{\partial x_r} = \sum_{i=1}^{{\rm rank}(A)} \big(e_p^TF_ie_q\big)\big(v_i^Te_r\big) \\ }$$ Another way to handle this gradient is to vectorize the matrices (by column-stacking): $$m = {\rm vec}(M),\qquad f_i = {\rm vec}(F_i)$$ Now you can write the differential as a vector, and the gradient as an ordinary matrix. $$\eqalign{ dm &= \sum_{i=1}^{{\rm rank}(A)} f_i v_i^Tdx \\ G = \frac{\partial m}{\partial x} &= \sum_{i=1}^{{\rm rank}(A)} f_i v_i^T \\ G_{sr} = \frac{\partial m_s}{\partial x_r} &= \sum_{i=1}^{{\rm rank}(A)} e_s^T\big(f_iv_i^T\big)e_r\qquad \Big(m,f_i,e_s\in{\mathbb R}^{nk\times 1}\Big) \\ }$$ But I wonder why you are interested in this gradient. My guess is that you actually have no interest in it whatsoever; rather, you think you need it because it appears as an intermediate quantity when the chain rule is applied to a larger problem. If so, be aware that there are simpler ways to approach such problems which do not require any third- or fourth-order tensor calculations.
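The vectorized gradient $G=\sum_i f_iv_i^T$ can be verified against finite differences (exact here, since $M$ is linear in $x$); this is an illustrative sketch, not part of the original answer:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 4, 3, 2
A = rng.standard_normal((n, m))
B = rng.standard_normal((n, k))
C = rng.standard_normal((n, k)) + 2.0
x = rng.standard_normal((m, 1))
y = rng.standard_normal((n, 1))
D = 1.0 / C

def M_of(x_):
    return (A @ x_ - B * y) / C

U, s, Vt = np.linalg.svd(A)
r = np.linalg.matrix_rank(A)

# G = sum_i f_i v_i^T with f_i = vec(F_i) (column stacking).
G = np.zeros((n * k, m))
for i in range(r):
    F_i = D * (s[i] * np.outer(U[:, i], np.ones(k)))
    f_i = F_i.flatten(order='F')          # column-stacked vec(F_i)
    G += np.outer(f_i, Vt[i, :])

# Forward difference in each coordinate of x.
eps = 1e-6
G_fd = np.zeros_like(G)
for j in range(m):
    dx = np.zeros((m, 1)); dx[j] = eps
    G_fd[:, j] = (M_of(x + dx) - M_of(x)).flatten(order='F') / eps

print(np.allclose(G, G_fd, atol=1e-5))   # True
```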
Update
It occurred to me that the above gradients can be written in a form which doesn't require the calculation of the SVD of $A$: $$\eqalign{ G &= \Big(\big(D^T\boxtimes{\tt1_n}\big) \otimes \big({\tt1_k}\boxtimes {\tt I_n}\big)\Big) A \\ \Gamma_{pqr} &= \big(e_p\otimes De_q\big)^T Ae_r \\ }$$ where ${\tt I_n}\in{\mathbb R}^{n\times n}$ is the identity matrix and $\boxtimes$ denotes the Kronecker product, because we've previously used the symbol $\otimes$ to represent the Hadamard product.
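The SVD-free formula is easy to test as well; the check below compares it to an independently constructed gradient, using the fact that column $j$ of $G$ is ${\rm vec}\big(D\otimes Ae_j{\tt1_k}^T\big)$ (a sketch with made-up data):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 4, 3, 2
A = rng.standard_normal((n, m))
C = rng.standard_normal((n, k)) + 2.0
D = 1.0 / C

ones_n = np.ones((n, 1))
ones_k = np.ones((k, 1))

# G = ((D^T kron 1_n) Hadamard (1_k kron I_n)) A
G = (np.kron(D.T, ones_n) * np.kron(ones_k, np.eye(n))) @ A

# Independent construction: column j is vec(D * (A[:,j] 1_k^T)).
G_ref = np.zeros((n * k, m))
for j in range(m):
    G_ref[:, j] = (D * np.outer(A[:, j], np.ones(k))).flatten(order='F')

print(np.allclose(G, G_ref))   # True
```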