Show that $\operatorname{div} X = - \delta X^\flat$


I want to show the equality $\operatorname{div} X = -\delta X^\flat$, where $X \in \Gamma(TM)$ and $M$ is some Riemannian manifold with metric tensor $g_{ij}$. If I'm not mistaken, this holds for the Riemannian (Levi-Civita) connection.

Okay, let $X =X^k e_k$. Then
$$ \nabla_i (X^k e_k) = \frac{\partial X^k}{\partial x^i}e_k+X^j \Gamma^k_{ij} e_k, \\ \operatorname{div} X= \frac{\partial X^i}{\partial x^i}+X^j \Gamma^i_{ij}, \\ \Gamma^i_{ij} = \frac 1 2 g^{ik}(\partial_i g_{kj} + \partial_j g_{ki} - \partial_k g_{ij}). $$

On the other hand,
$$ X^\flat = g_{ij}X^j \theta^i, \\ \nabla_k X^\flat = g_{ij} \nabla_k (X^j \theta^i) = g_{ij} \frac{\partial X^j}{\partial x^k} \theta^i - g_{sj}X^j \Gamma^s_{ki} \theta^i, \\ \delta X^\flat=-g^{ki}\left( g_{ij} \frac{\partial X^j}{\partial x^k} - g_{sj}X^j \Gamma^s_{ki} \right) = -\frac{\partial X^i}{\partial x^i} + g^{ki}g_{sj} X^j \Gamma^s_{ki}. $$

We can see that $\operatorname{div} X = -\delta X^\flat$ holds iff
$$ X^j \Gamma^i_{ij} = -g^{ki}g_{sj}X^j \Gamma^s_{ki}. $$

Now recall that
$$ \Gamma^s_{ki} = \frac 1 2 g^{ls}( \partial_{k} g_{li}+\partial_i g_{lk} - \partial_l g_{ki}), \\ g_{sj}\Gamma^s_{ki}=\frac 1 2 (\partial_k g_{ji}+\partial_i g_{jk}-\partial_j g_{ki}), \\ g^{ki}g_{sj}\Gamma^s_{ki} = \frac 1 2 g^{ki}(\partial_k g_{ji}+\partial_i g_{jk}-\partial_j g_{ki}). $$

Hence the equality $\operatorname{div} X = -\delta X^\flat$ holds if
$$ g^{ik}(\partial_i g_{kj} + \partial_j g_{ki} - \partial_k g_{ij}) = g^{ik}(-\partial_k g_{ji}-\partial_i g_{jk}+\partial_j g_{ki}), \\ g^{ik} \partial_i g_{kj} = -g^{ik} \partial_i g_{jk}. $$

But the equality $g^{ik} \partial_i g_{kj} = -g^{ik} \partial_i g_{jk}$ clearly cannot hold in general. Please tell me: where is the problem?
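Before hunting for the error, it may help to confirm symbolically that the identity itself does hold. Below is a small SymPy sketch; the polar metric on $\mathbb{R}^2\setminus\{0\}$ is just an illustrative choice, and the names (`divX`, `minus_delta`, etc.) are mine. It compares $\operatorname{div} X = \partial_i X^i + \Gamma^i_{ij} X^j$ with $-\delta X^\flat = g^{ki}\nabla_k X_i$ for a generic vector field:

```python
import sympy as sp

# Polar coordinates on R^2 minus the origin: an illustrative metric choice.
r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])   # metric g_ij
ginv = g.inv()                       # inverse metric g^ij
n = 2

# Christoffel symbols Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij})
Gamma = [[[sum(sp.Rational(1, 2) * ginv[k, l] *
               (sp.diff(g[l, j], x[i]) + sp.diff(g[l, i], x[j]) - sp.diff(g[i, j], x[l]))
               for l in range(n))
           for j in range(n)] for i in range(n)] for k in range(n)]

# A generic vector field X = X^r e_r + X^theta e_theta
Xr = sp.Function('Xr')(r, th)
Xth = sp.Function('Xth')(r, th)
X = [Xr, Xth]

# div X = d_i X^i + Gamma^i_{ij} X^j
divX = sum(sp.diff(X[i], x[i]) for i in range(n)) \
     + sum(Gamma[i][i][j] * X[j] for i in range(n) for j in range(n))

# -delta X^flat = g^{ki} nabla_k X_i, with X_i = g_{ij} X^j and
# nabla_k X_i = d_k X_i - Gamma^s_{ki} X_s  (note the d_k hits g_{ij} X^j as a whole)
Xlow = [sum(g[i, j] * X[j] for j in range(n)) for i in range(n)]
nablaX = [[sp.diff(Xlow[i], x[k]) - sum(Gamma[s][k][i] * Xlow[s] for s in range(n))
           for i in range(n)] for k in range(n)]
minus_delta = sum(ginv[k, i] * nablaX[k][i] for k in range(n) for i in range(n))

print(sp.simplify(divX - minus_delta))   # 0: the identity div X = -delta X^flat holds
```

Note that the covariant derivative above differentiates the full lowered component $X_i = g_{ij}X^j$, including the $\partial_k g_{ij}$ term.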

Best answer:

It seems that you are using $\delta \alpha = -g^{ij} \nabla_j\alpha_i$ as your definition. So your question is really why $\nabla_i X^i = g^{ij} \nabla _i X_j$, or in general why

$$\nabla_i X^k = g^{jk} \nabla _i X_j\ .$$

There are several ways to answer your question.

The first one is the laziest one (and the most useful one). Because both sides of your equation are independent of coordinates, it suffices to check the equation in any one coordinate system at any point $x$. In particular, we use normal coordinates such that $$g_{ij}=g^{ij} = \delta _{ij}\ ,\ \text{ and }\ \ \Gamma_{ij}^k = 0 = \partial_k g_{ij}\ \ \text{at }x\ .$$

Then

$$ g^{jk} \nabla_i X_j = g^{jk} \nabla_i \big( g_{jl} X^l\big) = \frac{\partial X^k}{\partial x^i} = \nabla_i X^k\ \ \ \text{at }x\ .$$

The second method is to compute directly:

$$g^{jk} \nabla_i X_j = g^{jk} (X_{j, i} - \Gamma_{ij}^l X_l) = g^{jk} \big((g_{jm}X^m)_{,i} - \Gamma_{ij}^l g_{lm}X^m\big)$$ which is

$$g^{jk} \nabla_i X_j = X^k_{\ ,i} + g^{jk}X^m \big(g_{jm,i} - \Gamma_{ij}^l g_{lm} \big)\ .$$

Using the definition of $\Gamma$,

$$g_{jm,i} - \Gamma_{ij}^l g_{lm} = g_{jm,i} - \frac{1}{2} g^{ln} \big( g_{jn, i} + g_{ni, j} - g_{ij,n} \big)g_{lm} = \frac{1}{2}\big(g_{ij,m} + g_{jm,i} -g_{mi,j}\big)\ .$$

Hence $g^{jk} \nabla_i X_j = X^k_{\ ,i} + \Gamma_{im}^k X^m = \nabla_i X^k$.
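This direct computation can also be sanity-checked symbolically on a concrete metric. A minimal SymPy sketch, using the polar metric on $\mathbb{R}^2\setminus\{0\}$ purely as an illustrative assumption, compares $g^{jk}\nabla_i X_j$ with $\nabla_i X^k$ entry by entry:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])   # illustrative metric (polar coordinates)
ginv = g.inv()
n = 2

# Christoffel symbols Gamma^k_{ij}
Gamma = [[[sum(sp.Rational(1, 2) * ginv[k, l] *
               (sp.diff(g[l, j], x[i]) + sp.diff(g[l, i], x[j]) - sp.diff(g[i, j], x[l]))
               for l in range(n))
           for j in range(n)] for i in range(n)] for k in range(n)]

# Generic vector field X^k and its lowered components X_j = g_{jm} X^m
X = [sp.Function('X0')(r, th), sp.Function('X1')(r, th)]
Xlow = [sum(g[j, m] * X[m] for m in range(n)) for j in range(n)]

# Left side: differentiate the one-form, then raise: g^{jk} (d_i X_j - Gamma^l_{ij} X_l)
nabla_low = [[sp.diff(Xlow[j], x[i]) - sum(Gamma[l][i][j] * Xlow[l] for l in range(n))
              for j in range(n)] for i in range(n)]
lhs = [[sum(ginv[j, k] * nabla_low[i][j] for j in range(n))
        for k in range(n)] for i in range(n)]

# Right side: differentiate the vector field directly: d_i X^k + Gamma^k_{im} X^m
rhs = [[sp.diff(X[k], x[i]) + sum(Gamma[k][i][m] * X[m] for m in range(n))
        for k in range(n)] for i in range(n)]

diffs = [sp.simplify(lhs[i][k] - rhs[i][k]) for i in range(n) for k in range(n)]
print(diffs)   # all entries zero: raising the index commutes with nabla
```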

The last one is conceptual. The process of raising or lowering indices can be thought of as the composition of two operations. First you tensor $X$ with the metric $g$ (which is a $(0,2)$-tensor) to form a $(1, 2)$-tensor

$$ (g\otimes X)^k_{\ ij} = g_{ij} X^k\ ,$$

followed by a contraction $C$ (summing one upper index against one lower index). Thus

$$(X^\flat)_i = C(g\otimes X)_i= g_{ij}X^j \ .$$

As $\nabla$ commutes with any contraction and $\nabla g=0$ (metric compatibility),

$$\nabla X^\flat = \nabla \big(C (g\otimes X)\big) = C\nabla(g\otimes X) = C\big( \nabla g \otimes X + g\otimes \nabla X\big) = C\big(g\otimes \nabla X\big)\ .$$

This is a coordinate free expression of the equation

$$\nabla_i X_j = g_{jk} \nabla_i X^k \Leftrightarrow g^{jk}\nabla_i X_j = \nabla_i X^k\ .$$
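The conceptual argument hinges on metric compatibility, $\nabla g = 0$, i.e. $\nabla_k g_{ij} = \partial_k g_{ij} - \Gamma^l_{ki} g_{lj} - \Gamma^l_{kj} g_{il} = 0$. A short SymPy check of this, again only as a sketch on an illustrative metric (polar coordinates):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])   # illustrative metric (polar coordinates)
ginv = g.inv()
n = 2

# Christoffel symbols of the Levi-Civita connection
Gamma = [[[sum(sp.Rational(1, 2) * ginv[k, l] *
               (sp.diff(g[l, j], x[i]) + sp.diff(g[l, i], x[j]) - sp.diff(g[i, j], x[l]))
               for l in range(n))
           for j in range(n)] for i in range(n)] for k in range(n)]

# nabla_k g_ij = d_k g_ij - Gamma^l_{ki} g_lj - Gamma^l_{kj} g_il
nabla_g = [[[sp.simplify(sp.diff(g[i, j], x[k])
                         - sum(Gamma[l][k][i] * g[l, j] for l in range(n))
                         - sum(Gamma[l][k][j] * g[i, l] for l in range(n)))
             for j in range(n)] for i in range(n)] for k in range(n)]
print(nabla_g)   # every entry zero: the Levi-Civita connection is metric
```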