Let $\Omega \subset \mathbb{R}^n$ be open, $u \in C^1(\Omega)$, and $\phi \in C^{1}_{c}(\Omega, \mathbb{R}^n)$, the set of continuously differentiable functions $\Omega \to \mathbb{R}^n$ with compact support. Then, apparently,
$$ \int_{\Omega}u\operatorname{div} \phi \,\mathrm{d}x = -\int_{\Omega} \phi \cdot \nabla{u} \,\mathrm{d}x $$
I actually don't see why this holds. I know the product rule for the divergence, $$\operatorname{div}(u \phi)=u\operatorname{div}(\phi)+\nabla u \cdot \phi \, .$$
But if this proves the equation above, then this would imply that
$$\int_{\Omega} \operatorname{div}(u \phi) \,\mathrm{d}x= 0 \, .$$
This looks like an application of the Divergence Theorem, but I'm afraid I don't see the exact argument.
Start from the product rule $$\nabla\cdot(u\phi) = u\,\nabla\cdot\phi + \nabla u\cdot\phi$$ and integrate it over an open, bounded region $\Omega\subset\mathbb{R}^n$ with $C^1$ boundary $\partial\Omega$ to obtain $$\int_\Omega\nabla\cdot(u\phi)~\mathrm{d}x = \int_\Omega u\,\nabla\cdot\phi~\mathrm{d}x + \int_\Omega\nabla u\cdot\phi~\mathrm{d}x.$$

We now apply the Divergence Theorem to the left-hand side to obtain $$\int_\Omega\nabla\cdot(u\phi)~\mathrm{d}x = \int_{\partial\Omega} u\,\phi\cdot\hat{n}~\mathrm{d}S.$$ Substituting this into the first equality yields $$\int_{\partial\Omega} u\,\phi\cdot\hat{n}~\mathrm{d}S = \int_\Omega u\,\nabla\cdot\phi~\mathrm{d}x + \int_\Omega\nabla u\cdot\phi~\mathrm{d}x,$$ or, equivalently, $$\int_\Omega u\,\nabla\cdot\phi~\mathrm{d}x = -\int_\Omega\nabla u\cdot\phi~\mathrm{d}x + \int_{\partial\Omega} u\,\phi\cdot\hat{n}~\mathrm{d}S.$$

If $\phi$ has compact support in $\Omega$, then $\phi$ vanishes in a neighborhood of $\partial\Omega$, so the entire surface integral is zero and we are left with $$\int_\Omega u\,\nabla\cdot\phi~\mathrm{d}x = -\int_\Omega\nabla u\cdot\phi~\mathrm{d}x,$$ as desired. (For a general open $\Omega$, run the same argument on a smooth bounded region $\Omega'$ with $\operatorname{supp}\phi \subset \Omega' \subset\subset \Omega$; both integrands vanish outside $\operatorname{supp}\phi$, so the integrals over $\Omega$ and $\Omega'$ agree.)
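If it helps to see the identity concretely, here is a quick numerical sanity check in one dimension, where it reduces to $\int_0^1 u\,\phi'\,\mathrm{d}x = -\int_0^1 u'\,\phi\,\mathrm{d}x$. The choices $u(x)=x^2$ and $\phi(x)=\sin(\pi x)$ are my own; $\phi$ is not compactly supported in $(0,1)$, but it vanishes at the endpoints, which is all the boundary term needs.

```python
import numpy as np
from scipy.integrate import quad

# 1D instance of integration by parts on Omega = (0, 1):
#   int u * phi' dx  should equal  - int u' * phi dx
u    = lambda x: x**2
du   = lambda x: 2 * x                       # u'  (nabla u in 1D)
phi  = lambda x: np.sin(np.pi * x)           # vanishes at x = 0 and x = 1
dphi = lambda x: np.pi * np.cos(np.pi * x)   # phi' (div phi in 1D)

lhs, _ = quad(lambda x: u(x) * dphi(x), 0, 1)   # int u * div(phi)
rhs, _ = quad(lambda x: du(x) * phi(x), 0, 1)   # int grad(u) . phi

print(lhs, -rhs)  # both equal -2/pi ≈ -0.6366
```

Both quadratures return $-2/\pi$, as a direct integration by parts of either side confirms.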