Clarification question on applying divergence theorem to $\nabla \cdot (u \nabla u)$ on a compact manifold without boundary


I'm following the solutions given in this post.

Basically, they are trying to prove that the kernel of the Laplace operator on a compact manifold without boundary is just made up of constant functions.

In the second answer (the one by Robert Lewis), he integrates $\nabla \cdot (u \nabla u) = \langle\nabla u, \nabla u\rangle + u \nabla ^2 u$ over the manifold and applies the divergence theorem to the left hand side to get
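For reference, the pointwise identity being integrated here is just the product rule for the divergence (a standard computation, not specific to that answer):

```latex
\nabla \cdot (u \nabla u)
  = \langle \nabla u, \nabla u \rangle + u \, \nabla \cdot (\nabla u)
  = \langle \nabla u, \nabla u \rangle + u \, \nabla^2 u .
```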

$\int_M \langle\nabla u, \nabla u\rangle \, dV = -\int_M u \nabla^2 u \, dV$.

When I tried applying the divergence theorem I got

$\int_{\partial M} \vec{n} \cdot u \nabla u \, dS = \int_M \langle\nabla u, \nabla u\rangle \, dV + \int_M u \nabla^2 u \, dV$.

I know that I need $\int_{\partial M} \vec{n} \cdot u \nabla u \, dS = 0$, but I'm unsure how to justify this given that the manifold has no boundary. This is probably a silly question, but can someone please explain this point further? The rest of the solution from the post I'm referencing made complete sense; this is just where I got stuck. Thanks!
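As a numerical sanity check of the result being proved (not part of the original post), one can discretize the simplest compact manifold without boundary, the circle $S^1$, with a finite-difference Laplacian whose endpoints wrap around periodically. The wraparound is exactly what "no boundary" means here, so the kernel should be one-dimensional and consist of constant vectors. The matrix below is an assumed toy discretization, not the operator from the linked answer:

```python
# Toy check: discrete Laplacian on S^1 (periodic wraparound = no boundary).
# Expectation from the theorem: the kernel is exactly the constant vectors.
import numpy as np

n = 50
# Standard second-difference Laplacian on n grid points...
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
# ...with periodic wraparound, so no boundary rows appear.
L[0, -1] = L[-1, 0] = 1

eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues sorted ascending
mask = np.abs(eigvals) < 1e-10        # the (numerical) kernel
kernel_dim = int(np.sum(mask))
v = eigvecs[:, mask][:, 0]

print(kernel_dim)              # 1
print(np.allclose(v, v[0]))   # True: the kernel vector is constant
```

Replacing the wraparound with, say, Dirichlet rows reintroduces a boundary, and the zero eigenvalue disappears, which matches the role the (empty) boundary term plays in the proof.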