I do not want to use index notation.
I want to compute the derivative
$$ D_x (Axx^\top A) = ? $$ where $A$ is an $n\times n$ symmetric matrix and $x$ is a vector in $\mathbb{R}^n$. I tried resources such as the Matrix Cookbook, but they don't deal with scenarios like this: here the function $f(x) = Axx^\top A$ takes a vector as input and returns a matrix as output.
It is possible to express this without index notation, and I would like an answer of that type. I would also like it step by step, so I can figure out how to perform similar calculations in the future.
Attempt
One attempt is to use the Fréchet derivative definition (I will use the Frobenius norm): $$ \begin{align} \lim_{\|v\|\to 0} \frac{\|A(x+v)(x+v)^\top A - Axx^\top A - Dv\|_F}{\|v\|} &= \lim_{\|v\|\to 0} \frac{\|A(xv^\top + vx^\top + vv^\top)A - Dv\|_F}{\|v\|} \end{align} $$
Let's look at perturbations: $$f(x+v) = A(x+v)(x+v)^\top A = Axx^\top A + Axv^\top A + Avx^\top A + Avv^\top A$$
The derivative is often defined as the unique linear map such that $$f(x+v) = f(x) + D_{f;x}(v) + o(\|v\|)$$ as $v\rightarrow 0$.
Since $\|Avv^\top A\|_F \le \|A\|_F^2\,\|v\|^2 = o(\|v\|)$, the quadratic term is absorbed into the remainder, and $D_{f;x}: v\mapsto A(xv^\top+vx^\top)A$ is the derivative, a linear map. We can express it not as a matrix but as a third-order tensor (a matrix would be a second-order tensor). However, compact expressions for higher-order tensors are elusive, which is why index notation is used.
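As a sanity check (my own sketch, not part of the derivation above), one can compare $D_{f;x}(v) = A(xv^\top + vx^\top)A$ against a finite-difference approximation of $f$ in the direction $v$; the sizes and step below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = M + M.T                      # symmetric A, as in the question
x = rng.standard_normal(n)
v = rng.standard_normal(n)

def f(x):
    # f(x) = A x x^T A
    return A @ np.outer(x, x) @ A

# Proposed derivative applied to the direction v:
# D_{f;x}(v) = A (x v^T + v x^T) A
D = A @ (np.outer(x, v) + np.outer(v, x)) @ A

# Finite-difference directional derivative of f at x along v
h = 1e-6
fd = (f(x + h * v) - f(x)) / h

# The two matrices agree up to O(h): the discrepancy is the
# quadratic term h * A v v^T A that the derivative discards.
err = np.max(np.abs(fd - D))
print(err)
```

The error shrinks linearly in `h`, consistent with the quadratic remainder $Avv^\top A$ being $o(\|v\|)$.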