The other day, on an exam, I had the following exercise:
Given a vector field $\mathbf F(x, y, z)$, calculate the following integral over a surface $S$: $$ \iint_{S} (\nabla \times \mathbf F) \cdot d\mathbf S $$

My first intuition was to use Stokes' Theorem and solve it as a line integral. The result turns out to be $0$. So then I thought: applying the Divergence Theorem should give the same result, because $$ \int_{C} \mathbf F \cdot d\mathbf r = \iint_{S} (\nabla \times \mathbf F) \cdot d\mathbf S = \iiint_W \nabla \cdot (\nabla \times \mathbf F) \; dV $$ But $\nabla \cdot (\nabla \times \mathbf F) = 0$, so $$ \int_{C} \mathbf F \cdot d\mathbf r = \iint_{S} (\nabla \times \mathbf F) \cdot d\mathbf S = \iiint_W 0 \;dV = 0 $$

But it is well known that not all line integrals are zero. So is this a mere coincidence, or does it have a deeper meaning? I suspect it has to do with the regions on which these integrals are defined. Can someone explain to me in depth why $$ \int_{\partial^2 W} \mathbf F \cdot d\mathbf r = \iint_{\partial W} (\nabla \times \mathbf F) \cdot d\mathbf S = \iiint_W \nabla \cdot (\nabla \times \mathbf F) \;dV = 0 $$ does not always work?
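For what it's worth, the identity $\nabla \cdot (\nabla \times \mathbf F) = 0$ that the argument relies on can be checked symbolically for an arbitrary smooth field $\mathbf F = (P, Q, R)$. A minimal sketch using SymPy (the component names `P`, `Q`, `R` are just placeholders for arbitrary smooth functions):

```python
# Symbolic check that div(curl F) = 0 for a generic smooth
# vector field F = (P, Q, R), where P, Q, R are arbitrary
# undetermined functions of x, y, z.
from sympy import symbols, Function, diff, simplify

x, y, z = symbols('x y z')
P = Function('P')(x, y, z)
Q = Function('Q')(x, y, z)
R = Function('R')(x, y, z)

# curl F = (dR/dy - dQ/dz, dP/dz - dR/dx, dQ/dx - dP/dy)
curl = (diff(R, y) - diff(Q, z),
        diff(P, z) - diff(R, x),
        diff(Q, x) - diff(P, y))

# div(curl F): mixed partials cancel by equality of
# second derivatives (Clairaut's theorem)
div_curl = diff(curl[0], x) + diff(curl[1], y) + diff(curl[2], z)

print(simplify(div_curl))  # 0
```

So the identity itself is not the issue; the question is whether the chain of theorems linking $C$, $S$, and $W$ is applicable.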