Let $$a = x^tWx$$ where $x \in \mathbb{R}^{m\times1}$ and $W \in \mathbb{R}^{m\times m}$.
Then $$\mathrm{d}a = (\mathrm{d}x^tW)x + x^tW\mathrm{d}x$$ $$\mathrm{d}a = (\mathrm{d}x^t)Wx + x^t(\mathrm{d}W)x + x^tW\mathrm{d}x$$ $$\mathrm{d}a = (\mathrm{d}x)^tWx + x^t(\mathrm{d}W)x + x^tW\mathrm{d}x$$
From this I want to find the following derivatives: $$\frac{\mathrm{d}a }{\mathrm{d} W} = \space ? \space$$ $$\frac{\mathrm{d}a }{\mathrm{d} x} = \space ? \space$$
So, for each derivative, I set the other differential to zero and get:
$$\mathrm{d}a = x^t(\mathrm{d}W)x$$ $$\mathrm{d}a = (\mathrm{d}x)^tWx + x^tW\mathrm{d}x$$
And this is where I get stuck: I don't know how to convert these into the canonical differential form described in the section "Conversion from differential to derivative form" of the Wikipedia article on matrix calculus (https://en.wikipedia.org/wiki/Matrix_calculus), from which I could read off the derivatives.
My failed attempt:
$$\mathrm{d}a = x^t(\mathrm{d}W)x$$ $$\mathrm{d}a = x^t(x^t(\mathrm{d}W)^t)^t$$
which does get $\mathrm{d}W$ into the last position on the right, but it complicates things even further with all those transposes.
Similarly, I tried:
$$\mathrm{d}a = (\mathrm{d}x)^tWx + x^tW\mathrm{d}x$$ $$\mathrm{d}a = (x^tW^t\mathrm{d}x)^t + x^tW\mathrm{d}x$$
How can I get from the differential form I have now to the canonical one (namely $\mathrm{d}a = A\,\mathrm{d}x$) so that I can use it to read off the derivative?
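As a sanity check on my differential (my own addition, not part of the derivation above): for a small perturbation $\mathrm{d}x$, the increment $a(x+\mathrm{d}x) - a(x)$ should match $(\mathrm{d}x)^tWx + x^tW\mathrm{d}x$ up to second-order terms. A minimal pure-Python sketch with arbitrary $2\times2$ values:

```python
# Numeric check that a(x+dx) - a(x) agrees with the differential
# (dx)^T W x + x^T W dx to first order. W, x, dx values are arbitrary.

def quad(W, x):
    # a = x^T W x
    return sum(x[i] * W[i][j] * x[j] for i in range(2) for j in range(2))

W = [[1.0, 2.0], [3.0, 4.0]]
x = [0.5, -1.5]
dx = [1e-6, -2e-6]

# the claimed differential: (dx)^T W x + x^T W dx
da = (sum(dx[i] * W[i][j] * x[j] for i in range(2) for j in range(2))
      + sum(x[i] * W[i][j] * dx[j] for i in range(2) for j in range(2)))

# actual increment of a; the leftover is the second-order term dx^T W dx
actual = quad(W, [x[0] + dx[0], x[1] + dx[1]]) - quad(W, x)
assert abs(actual - da) < 1e-10
```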
For real vectors, the scalar product commutes:
$$x^Ty = y^Tx$$
Applying this insight to your vector differential,
$$\eqalign{ da &= dx^TWx+x^TW\,dx \cr &= x^T(W+W^T)\,dx \cr &= \big((W+W^T)x\big)^Tdx = g^Tdx }$$
so the gradient with respect to $x$ is simply
$$g = \frac{\partial a}{\partial x} = (W+W^T)x$$
For real matrices, the scalar/Frobenius product also commutes:
$$X:Y = Y:X = {\rm tr}(Y^TX)$$
Applying this to your matrix differential (using the cyclic property of the trace in the second step),
$$\eqalign{ da &= x^T\,dW\,x \cr &= {\rm tr}\big(xx^T\,dW\big) \cr &= xx^T:dW = G:dW }$$
The gradient with respect to $W$ is
$$G = \frac{\partial a}{\partial W} = xx^T$$
Note that my vector gradient $(g)$ has the same shape as the vector differential $(dx)$, while my matrix gradient $(G)$ has the same shape as the matrix differential $(dW)$. Some people (not me) prefer a convention wherein the gradient has the shape of the transpose of the differential.
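The two gradients above are easy to confirm numerically with forward finite differences. A minimal pure-Python sketch on an arbitrary $2\times2$ example (values are my own, chosen only for illustration):

```python
# Finite-difference check of the two gradients derived above:
#   da/dx = (W + W^T) x   and   da/dW = x x^T

def a(W, x):
    # a = x^T W x
    return sum(x[i] * W[i][j] * x[j] for i in range(2) for j in range(2))

W = [[1.0, 2.0], [3.0, 4.0]]
x = [0.5, -1.5]
h = 1e-6  # finite-difference step

# gradient w.r.t. x: g = (W + W^T) x
g = [sum((W[i][j] + W[j][i]) * x[j] for j in range(2)) for i in range(2)]
for i in range(2):
    xp = list(x)
    xp[i] += h
    fd = (a(W, xp) - a(W, x)) / h
    assert abs(fd - g[i]) < 1e-4

# gradient w.r.t. W: G = x x^T, i.e. G[i][j] = x[i] * x[j]
for i in range(2):
    for j in range(2):
        Wp = [row[:] for row in W]
        Wp[i][j] += h
        fd = (a(Wp, x) - a(W, x)) / h
        assert abs(fd - x[i] * x[j]) < 1e-4
```

Since $a$ is linear in $W$, the finite difference for $G$ is exact up to rounding, while the check for $g$ is accurate to $O(h)$.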