I need to solve the following:
$$ \sum_{i,c,d} x_{i,a} x_{i,b} x_{i,c} x_{i,d} W_{c,d} = \sum_{i} x_{i,a} x_{i,b} y_{i} $$
for known x and y, and W is symmetric. It is safe to assume that the elements of x and y are completely random.
If I define
$$ A_{a,b,c,d} = \sum_{i} x_{i,a} x_{i,b} x_{i,c} x_{i,d} $$
$$ B_{a,b} = \sum_{i} x_{i,a} x_{i,b} y_{i} $$
and create auxiliary matrices
$$ W'_{M*a+b} = W_{a,b} $$
$$ A'_{M*a+b,M*c+d} = A_{a,b,c,d} $$
and
$$ B'_{M*a+b} = B_{a,b} $$
I get an equation of the form
$$ \sum_{j} A'_{ij} W'_{j} = B'_{i} $$
which suggests one can solve using
$$ W'_i = \sum_j \left(A'^{-1}\right)_{ij} B'_j $$
and converting back to the original W matrix. The issue is that A' is never invertible (for some reason?). Is there some symmetry of A' that forces its determinant to be 0, and can this be fixed by removing the symmetric parts of A' to obtain a lower-dimensional A''? Supposing that M = 6, there are 126 unique elements of A and 21 of B and W. Is there a simple way of converting to different auxiliary matrices which have none of the degeneracy caused by the symmetry in the definitions of A and B?
Intuitively, there are 21 (= 6+5+4+3+2+1) unique equations and 21 unknowns, so the problem is exactly determined and there should be a way to lift the degeneracy.
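For concreteness, here is a minimal NumPy reproduction of the singularity (the sample count N = 50 is an illustrative choice; any N large enough works):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 50, 6                      # illustrative sample count and matrix size
x = rng.normal(size=(N, M))

# A'_{M*a+b, M*c+d} = sum_i x_ia x_ib x_ic x_id
A4 = np.einsum('ia,ib,ic,id->abcd', x, x, x, x)
Ap = A4.reshape(M * M, M * M)     # row-major reshape matches the M*a+b indexing

# Row (a,b) equals row (b,a), so at most 21 of the 36 rows are
# independent and A' is always singular.
print(np.linalg.matrix_rank(Ap))  # 21
```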
$ \def\A{{\large\cal A}} \def\LR#1{\left( #1 \right)} \def\BR#1{\Big[ #1 \Big]} \def\op#1{\operatorname{#1}} \def\vecc#1{\op{vec}\LR{#1}} \def\qiq{\quad\implies\quad} \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\c#1{\color{red}{#1}} \def\CLR#1{\c{\LR{#1}}} $Let's use a naming convention wherein an uppercase letter denotes a matrix, lowercase a vector, Greek a scalar, and calligraphic uppercase a tensor. With this convention, your first variable is renamed $\:A\to\A$
Next we need the index mapping for flattening a matrix into a vector, i.e. $\:\vecc F=f$ $$\eqalign{ F &\in {\mathbb R}^{m\times n} \implies f \in {\mathbb R}^{mn\times 1} \\ F_{ij} &= f_{a} \\ a &= i+(j-1)\,m \\ i &= 1+(a-1)\,{\rm mod}\,m, \quad j = 1+(a-1)\,{\rm div}\,m \\ }$$ Applying this to the index pairs of a $2k$-order tensor flattens it to a $k$-order tensor.
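In 0-based NumPy terms this column-major mapping becomes $a = i + j\,m$; a quick sketch using `order='F'`:

```python
import numpy as np

# vec(F): column-major (Fortran-order) flattening; with 0-based
# indices the mapping is a = i + j*m, i.e. f[i + j*m] == F[i, j].
def vec(F):
    return F.reshape(-1, order='F')

m, n = 3, 4
F = np.arange(m * n).reshape(m, n)
f = vec(F)
i, j = 2, 1
assert f[i + j * m] == F[i, j]
```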
Flattening your tensor equation into a matrix equation admits a general solution $$\eqalign{ B &= \A:W \qiq b = Aw \\ \\ w &= A^{\bf+}b \;+\; \LR{I-A^{\bf+}A}q \\ &= A^{\bf+}b \;+\; Pq \\ }$$ where a colon denotes the double-contraction product, $A^{\bf+}$ is the pseudoinverse, $P$ is the nullspace projector, and $q$ is an arbitrary vector.
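As a numerical sketch of this general solution (NumPy's `pinv` standing in for $A^{\bf+}$, with illustrative random data):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 50, 6                     # illustrative dimensions
x = rng.normal(size=(N, M))
y = rng.normal(size=N)

# Flatten the tensor equation B = A : W into b = A w
A = np.einsum('ia,ib,ic,id->abcd', x, x, x, x).reshape(M * M, M * M)
b = np.einsum('ia,ib,i->ab', x, x, y).reshape(M * M)

Aplus = np.linalg.pinv(A)        # pseudoinverse A^+
P = np.eye(M * M) - Aplus @ A    # nullspace projector

q = rng.normal(size=M * M)       # arbitrary vector
w = Aplus @ b + P @ q

# every such w solves A w = b, regardless of the choice of q
assert np.allclose(A @ w, b)
```

Note that $b$ lies in the range of $A$ here (both are built from the same $x$ samples), so the system is consistent for any $q$.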
The Commutation Matrix $(K)$ can be used to enforce the symmetry constraint $$\eqalign{ &\vecc{W} = \vecc{W^T} \doteq K \vecc{W} \\ &w = Kw \\ &\LR{A^+b+Pq} = K\LR{A^+b+Pq} \\ &\CLR{I-K}Pq = \LR{K-I}A^+b \\ &\c{L}Pq = -LA^+b \\ &q = -\LR{LP}^+LA^+b \\ }$$ Substituting this into the previous expression yields $$\eqalign{ w &= A^+b + Pq \\ &= A^+b - P\LR{LP}^+LA^+b \qquad\quad \\ &= \BR{I-P\LR{LP}^+L}\,A^+b \\ }$$ which can be un-flattened to recover the $W$ matrix.
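A sanity check of the final formula, using synthetic data generated from a known symmetric $W$ so recovery can be verified (dimensions are illustrative, and `commutation_matrix` is a hypothetical helper name):

```python
import numpy as np

def commutation_matrix(m):
    # K @ vec(W) == vec(W.T) for m x m matrices
    K = np.zeros((m * m, m * m))
    for i in range(m):
        for j in range(m):
            K[i + j * m, j + i * m] = 1.0
    return K

rng = np.random.default_rng(2)
N, M = 60, 6
x = rng.normal(size=(N, M))

# synthetic data from a known symmetric W
W_true = rng.normal(size=(M, M))
W_true = (W_true + W_true.T) / 2
y = np.einsum('ic,id,cd->i', x, x, W_true)

A = np.einsum('ia,ib,ic,id->abcd', x, x, x, x).reshape(M * M, M * M)
b = np.einsum('ia,ib,i->ab', x, x, y).reshape(M * M)

I = np.eye(M * M)
Aplus = np.linalg.pinv(A)
P = I - Aplus @ A                      # nullspace projector
K = commutation_matrix(M)
L = I - K

# w = [I - P (LP)^+ L] A^+ b
w = (I - P @ np.linalg.pinv(L @ P) @ L) @ (Aplus @ b)
W = w.reshape(M, M)

assert np.allclose(W, W.T)             # symmetry is enforced
assert np.allclose(W, W_true)          # unique symmetric solution recovered
```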