What is the following gradient?
$$ \frac{\partial(\mathbf{u}(\mathbf{x})^\top \mathbf{A}\, \mathbf{u}(\mathbf{x}))}{\partial \mathbf{x}} $$
where $\mathbf{x}$ is a vector and the symmetric matrix $\mathbf{A}$ does not depend on $\mathbf{x}$?
I think it is $2\frac{\partial \mathbf{u}(\mathbf{x})}{\partial \mathbf{x}}\mathbf{A} \mathbf{u}(\mathbf{x})$, but I can't prove it.
Your guess is right up to a transpose. Matrix notation can make things awkward, so let $\langle {\bf v},{\bf w}\rangle \doteq {\bf v}^\top {\bf A} {\bf w}$. Since ${\bf A}$ is symmetric, $\langle \cdot,\cdot\rangle$ is bilinear and symmetric, so a product rule holds. With $f({\bf x}) = \langle {\bf u}({\bf x}), {\bf u}({\bf x})\rangle$, the product rule gives $$Df({\bf x})({\bf v}) = \langle D{\bf u}({\bf x})({\bf v}), {\bf u}({\bf x})\rangle + \langle {\bf u}({\bf x}), D{\bf u}({\bf x})({\bf v})\rangle = 2\langle D{\bf u}({\bf x})({\bf v}), {\bf u}({\bf x})\rangle = 2{\bf u}({\bf x})^\top {\bf A}\, D{\bf u}({\bf x})({\bf v}).$$Back in the awkward matrix notation, this means that $$\frac{\partial f}{\partial {\bf x}}({\bf x}) = 2{\bf u}({\bf x})^\top {\bf A} \frac{\partial {\bf u}}{\partial {\bf x}}({\bf x}).$$
Another way, of course, is to use the product rule with a naive single-variable calculus mindset: $$\frac{\partial}{\partial {\bf x}}({\bf u}({\bf x})^\top {\bf A}{\bf u}({\bf x})) = \frac{\partial {\bf u}}{\partial {\bf x}}({\bf x})^\top {\bf A} {\bf u}({\bf x}) + {\bf u}({\bf x})^\top {\bf A}\frac{\partial {\bf u}}{\partial {\bf x}}({\bf x}) = 2{\bf u}({\bf x})^\top {\bf A} \frac{\partial {\bf u}}{\partial {\bf x}}({\bf x}), $$since, component by component (i.e., for each $\partial/\partial x_i$), the two terms being added are just numbers, hence equal to their transposes, and equal to each other because ${\bf A}^\top = {\bf A}$; and $\partial/\partial {\bf x}$ commutes with $\top$ because transposition is a linear operator.
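If you want to convince yourself numerically, you can compare the formula $2{\bf u}({\bf x})^\top {\bf A}\, \partial {\bf u}/\partial {\bf x}$ against a finite-difference gradient. The map ${\bf u}$ below is an arbitrary smooth choice for illustration, not anything from the question:

```python
import numpy as np

# Numerical sanity check of
#   d/dx ( u(x)^T A u(x) ) = 2 u(x)^T A (du/dx)
# for a hypothetical smooth map u: R^3 -> R^4 and a random symmetric A.

rng = np.random.default_rng(0)
n, m = 3, 4

A = rng.standard_normal((m, m))
A = (A + A.T) / 2                       # symmetrize

def u(x):
    # arbitrary smooth map R^3 -> R^4 (illustrative choice)
    return np.array([np.sin(x[0]), x[1]**2, x[0]*x[2], np.exp(x[1])])

def J(x):
    # Jacobian du/dx of the map above, computed by hand (4 x 3)
    return np.array([
        [np.cos(x[0]), 0.0,           0.0 ],
        [0.0,          2 * x[1],      0.0 ],
        [x[2],         0.0,           x[0]],
        [0.0,          np.exp(x[1]),  0.0 ],
    ])

def f(x):
    return u(x) @ A @ u(x)

x = rng.standard_normal(n)
grad_formula = 2 * u(x) @ A @ J(x)      # row vector 2 u^T A (du/dx)

# central finite differences, one coordinate direction at a time
eps = 1e-6
grad_fd = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(n)
])

# the discrepancy should be tiny (finite-difference truncation/rounding error)
print(np.max(np.abs(grad_formula - grad_fd)))
```

Swapping the formula for the transposed guess $2\,(\partial {\bf u}/\partial {\bf x})^\top {\bf A}\, {\bf u}$ gives the same numbers arranged as a column, which is exactly the "up to a transpose" point above.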