Let $\underline{u}$ be a first-order tensor (say a column vector). I want to prove that:
$\underline{\operatorname{div}} \left( (\underline{\underline{\operatorname{grad}}} \, \underline{u})^T\right)= \underline{\operatorname{grad}} \, (\operatorname{div} \underline{u})$
where $\underline{\operatorname{grad}}$ is the first-order gradient (the usual one) and $\underline{\underline{\operatorname{grad}}}$ is the second-order gradient (i.e. the Jacobian matrix).
I would like a proof that does not involve any coordinates, since it is easy to find a proof using, for example, Cartesian coordinates.
Here is what I've done so far:
Since for any volume $V$ we have: $$\iiint _V \underline{\operatorname{div}} \left( (\underline{\underline{\operatorname{grad}}} \, \underline{u})^T\right) \; \mathrm{d}V = \iint_{S} (\underline{\underline{\operatorname{grad}}} \, \underline{u})^T \cdot \underline{n} \; \mathrm{d}S$$ (this is the definition of $\underline{\operatorname{div}}$), where $\underline{n}$ is the outward normal vector to the surface $S$ bounding the volume $V$.
And we can write: $$\iiint_V \underline{\operatorname{grad}} \, (\operatorname{div} \underline{u}) \; \mathrm{d}V = \iint_S (\operatorname{div} \underline{u} )\underline{n} \; \mathrm{d}S$$
Thus it is sufficient to prove that: $$\iint_{S} (\underline{\underline{\operatorname{grad}}} \, \underline{u})^T \cdot \underline{n} \; \mathrm{d}S =\iint_S (\operatorname{div} \underline{u} )\underline{n} \; \mathrm{d} S$$
The problem is that the pointwise identity $ (\underline{\underline{\operatorname{grad}}} \, \underline{u})^T \cdot \underline{n} = (\operatorname{div} \underline{u} )\underline{n} $ does not hold in general.
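Indeed, the two integrands generally differ pointwise. A quick symbolic sketch with SymPy (the test field $\underline{u}=(y,0,0)$ and normal $\underline{n}=(1,0,0)$ are arbitrary choices, just for illustration) makes this concrete:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = sp.Matrix([x, y, z])
u = sp.Matrix([y, 0, 0])        # arbitrary test field u = (y, 0, 0)
n = sp.Matrix([1, 0, 0])        # a fixed unit normal

J = u.jacobian(coords)          # grad(u)_{ij} = d u_i / d x_j
div_u = J.trace()               # div u = sum_i d u_i / d x_i = 0 here

lhs = J.T * n                   # (grad u)^T . n  ->  (0, 1, 0)^T
rhs = div_u * n                 # (div u) n      ->  (0, 0, 0)^T
print(lhs.T, rhs.T, lhs == rhs)
```

So the surface integrands disagree at individual points; only their integrals over a closed surface coincide.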
So how can I finish the proof?
If you need details, please tell me.
Thank you.
This calculation is done component-wise, so it isn't the most elegant way to show the fact. I would like to see a more general approach (this probably requires taking the metric tensor into account, or switching formalism to $k$-forms).
Let $A$ be a matrix-valued function. The $i$-th component of $div(A)$ is given by
$$ div(A)_i = \sum_j \partial_j A_{ij}$$
and the $ij$ component of the gradient of a vector function $u$ is
$$ grad(u)_{ij} = \partial_j u_i.$$
So the transpose of the gradient is $$(grad(u)^{\top})_{ij} = \partial_i u_j. $$
Now, evaluating the $i$-th component of the left-hand side gives:
$$div(grad(u)^{\top})_i = \sum_j \partial_j \partial_i u_j = \partial_i( \sum_j \partial_j u_j ) = \partial_i(div(u)),$$ where we have assumed that $u$ is smooth enough for the partial derivatives to commute. Now, the $i$-th component of the gradient of a scalar function $f$ is given by
$$ grad(f)_i = \partial_i f.$$
And we notice that $\partial_i (div(u)) = grad(div(u))_i$, and thus
$$ div(grad(u)^{\top})_i = grad(div(u))_i, $$ so $$ div(grad(u)^{\top}) = grad(div(u)). $$
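The index computation above can also be checked symbolically. Here is a short SymPy sketch verifying the identity; the particular smooth test field is an arbitrary choice, and `mat_div` is a hypothetical helper implementing $div(A)_i = \sum_j \partial_j A_{ij}$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = [x, y, z]
# an arbitrary smooth test field (any sufficiently smooth choice works)
u = sp.Matrix([x**2 * y + sp.sin(z), y * z**2, sp.exp(x) * z])

J = u.jacobian(sp.Matrix(coords))       # grad(u)_{ij} = d u_i / d x_j

def mat_div(A):
    """Row-wise divergence of a matrix field: div(A)_i = sum_j d A_{ij} / d x_j."""
    return sp.Matrix([sum(sp.diff(A[i, j], coords[j]) for j in range(3))
                      for i in range(3)])

div_u = J.trace()                        # div u = sum_i d u_i / d x_i
lhs = mat_div(J.T)                       # div( (grad u)^T )
rhs = sp.Matrix([sp.diff(div_u, v) for v in coords])  # grad(div u)

print(sp.simplify(lhs - rhs))            # zero vector: identity holds
```

This of course only confirms the result for one field in Cartesian coordinates; the component-wise proof above is what establishes it in general.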