I am trying to learn more about covariant differentiation. I'm specifically interested in physics applications, but I found this nice exercise in Misner, Thorne, and Wheeler's book Gravitation that I thought would help me get more familiar with the concept. It's purely math. Specifically, the task is to prove the product rule for covariant differentiation.
Let $\text{S}^{\alpha\beta}_{\gamma}$ be a $(2,1)$ tensor field, and let $\text{M}^{\gamma}_{\beta}$ be a $(1,1)$ tensor field. By contracting these tensor fields, one obtains the vector field $\text{S}^{\alpha\beta}_{\gamma}\text{M}_{\beta}^\gamma$. The divergence of this vector field reads \begin{equation} (\text{S}^{\alpha\beta}_{\gamma}\text{M}_{\beta}^\gamma)_{;\alpha}=\text{S}^{\alpha\beta}_{\gamma;\alpha}\text{M}_{\beta}^\gamma+\text{S}^{\alpha\beta}_{\gamma}\text{M}_{\beta;\alpha}^\gamma \end{equation} Verify the validity of this product rule by expressing both sides of the equation in terms of directional derivatives plus connection-coefficient corrections.
Here's what I did. Note that I use the Einstein summation convention throughout.

I started with the left-hand side of the equation.
LHS: \begin{align*} (\text{S}^{\alpha\beta}_{\gamma}\text{M}_{\beta}^\gamma)_{;\alpha}&=\partial_{\alpha}(\text{S}^{\alpha\beta}_{\gamma}\text{M}_{\beta}^\gamma)+\Gamma^{\alpha}_{\lambda\alpha}\text{S}^{\lambda\beta}_{\gamma}\text{M}_{\beta}^{\gamma}\\&=(\partial_{\alpha}\text{S}^{\alpha\beta}_{\gamma})\text{M}_{\beta}^\gamma + \text{S}^{\alpha\beta}_{\gamma}(\partial_{\alpha}\text{M}_{\beta}^\gamma) + \Gamma^{\alpha}_{\lambda\alpha}\text{S}^{\lambda\beta}_{\gamma}\text{M}_{\beta}^{\gamma} \end{align*} I then decided to do some work on the right-hand side.
RHS: \begin{align*} \text{S}^{\alpha\beta}_{\gamma;\alpha}\text{M}_{\beta}^\gamma+\text{S}^{\alpha\beta}_{\gamma}\text{M}_{\beta;\alpha}^\gamma&=\left((\partial_{\alpha}\text{S}^{\alpha\beta}_{\gamma})\text{M}_{\beta}^\gamma+\Gamma_{\alpha\lambda}^{\alpha}\text{S}^{\lambda\beta}_{\gamma}\text{M}_{\beta}^{\gamma}+\Gamma^{\beta}_{\alpha\lambda}\text{S}^{\alpha\lambda}_{\gamma}\text{M}_{\beta}^{\gamma}-\Gamma^{\lambda}_{\alpha\gamma}\text{S}^{\alpha\beta}_{\lambda}\text{M}_{\beta}^{\gamma}\right)\\&\phantom{x}+\left(\text{S}^{\alpha\beta}_{\gamma}(\partial_{\alpha}\text{M}_{\beta}^\gamma)+\Gamma^{\gamma}_{\alpha\lambda}\text{S}^{\alpha\beta}_{\gamma}\text{M}_{\beta}^{\lambda}-\Gamma^{\lambda}_{\alpha\beta}\text{S}^{\alpha\beta}_{\gamma}\text{M}_{\lambda}^{\gamma}\right)\\&=\left((\partial_{\alpha}\text{S}^{\alpha\beta}_{\gamma})\text{M}_{\beta}^\gamma + \text{S}^{\alpha\beta}_{\gamma}(\partial_{\alpha}\text{M}_{\beta}^\gamma) + \Gamma^{\alpha}_{\lambda\alpha} \text{S}^{\lambda\beta}_{\gamma}\text{M}_{\beta}^{\gamma}\right)\\&\phantom{x}+ \left(\Gamma^{\beta}_{\alpha\lambda}\text{S}^{\alpha\lambda}_{\gamma}\text{M}_{\beta}^{\gamma}-\Gamma^{\lambda}_{\alpha\gamma}\text{S}^{\alpha\beta}_{\lambda}\text{M}_{\beta}^{\gamma}+\Gamma^{\gamma}_{\alpha\lambda}\text{S}^{\alpha\beta}_{\gamma}\text{M}_{\beta}^{\lambda}-\Gamma^{\lambda}_{\alpha\beta}\text{S}^{\alpha\beta}_{\gamma}\text{M}_{\lambda}^{\gamma}\right)\\&=\left((\partial_{\alpha}\text{S}^{\alpha\beta}_{\gamma})\text{M}_{\beta}^\gamma + \text{S}^{\alpha\beta}_{\gamma}(\partial_{\alpha}\text{M}_{\beta}^\gamma) + \Gamma^{\alpha}_{\lambda\alpha} \text{S}^{\lambda\beta}_{\gamma}\text{M}_{\beta}^{\gamma}\right)+(0) \end{align*}
It's with the right-hand side that I got stuck. I should be able to somehow simplify
\begin{equation} \left(\Gamma^{\beta}_{\alpha\lambda}\text{S}^{\alpha\lambda}_{\gamma}\text{M}_{\beta}^{\gamma}-\Gamma^{\lambda}_{\alpha\gamma}\text{S}^{\alpha\beta}_{\lambda}\text{M}_{\beta}^{\gamma}+\Gamma^{\gamma}_{\alpha\lambda}\text{S}^{\alpha\beta}_{\gamma}\text{M}_{\beta}^{\lambda}-\Gamma^{\lambda}_{\alpha\beta}\text{S}^{\alpha\beta}_{\gamma}\text{M}_{\lambda}^{\gamma}\right) \end{equation}
to yield $0$, but I can't figure out how. That would make the right-hand and left-hand sides equal, completing the proof. I am stuck on this one step. Any ideas?
My Thoughts:
I'm not sure what kind of manipulation is needed. Do I need to manipulate indices? That seemed like a possibility: I have seen that dummy indices can sometimes be relabeled in a way that lets you rearrange terms, but I can't tell whether that applies here. Another possibility is some kind of factoring, perhaps multiplying by the identity and rearranging so that things cancel. I'm not sure whether that's the right idea, or even how I would carry it out. Apart from that, I am stumped.
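Not a proof, but as a sanity check one can verify numerically that the leftover expression vanishes. Here is a quick numpy sketch with random components for $\Gamma$, $\text{S}$, and $\text{M}$ (no symmetry assumed); the array layout and index letters are my own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # dimension; arbitrary

# Random components: G[i, j, k] stands for Gamma^i_{jk},
# S[a, b, c] for S^{ab}_c, and M[c, b] for M^c_b.
G = rng.standard_normal((n, n, n))
S = rng.standard_normal((n, n, n))
M = rng.standard_normal((n, n))

# The four leftover connection terms; every index is summed over,
# so each term is a single number.
t1 = np.einsum('bal,alg,gb->', G, S, M)   # Gamma^b_{al} S^{al}_g M^g_b
t2 = np.einsum('lag,abl,gb->', G, S, M)   # Gamma^l_{ag} S^{ab}_l M^g_b
t3 = np.einsum('gal,abg,lb->', G, S, M)   # Gamma^g_{al} S^{ab}_g M^l_b
t4 = np.einsum('lab,abg,gl->', G, S, M)   # Gamma^l_{ab} S^{ab}_g M^g_l

total = t1 - t2 + t3 - t4
print(total)  # ~ 0 up to floating-point error
```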
The key is to relabel dummy indices. For example, take the third term,
$$\Gamma^{\gamma}_{\alpha\lambda}\text{S}^{\alpha\beta}_{\gamma}\text{M}_{\beta}^{\lambda}$$
Relabeling $\gamma$ as $\lambda$ and $\lambda$ as $\gamma$ gives
$$\Gamma^{\gamma}_{\alpha\lambda}\text{S}^{\alpha\beta}_{\gamma}\text{M}_{\beta}^{\lambda}=\Gamma^{\lambda}_{\alpha\gamma}\text{S}^{\alpha\beta}_{\lambda}\text{M}_{\beta}^{\gamma}$$
which is exactly the second term; since the two enter with opposite signs, they cancel. Similarly, the first and fourth terms cancel each other.
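The relabelings themselves can also be checked numerically. In this numpy sketch (random, unsymmetrized components; `G[i, j, k]` stands for $\Gamma^i_{jk}$, and so on), swapping the letters in the `einsum` subscript string is exactly the dummy-index relabeling:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4  # dimension; arbitrary

G = rng.standard_normal((n, n, n))  # G[i, j, k] ~ Gamma^i_{jk}
S = rng.standard_normal((n, n, n))  # S[a, b, c] ~ S^{ab}_c
M = rng.standard_normal((n, n))     # M[c, b]   ~ M^c_b

# Third term equals second term after swapping the dummy names g <-> l:
t3 = np.einsum('gal,abg,lb->', G, S, M)  # Gamma^g_{al} S^{ab}_g M^l_b
t2 = np.einsum('lag,abl,gb->', G, S, M)  # Gamma^l_{ag} S^{ab}_l M^g_b

# Likewise the first and fourth terms agree after swapping b <-> l:
t1 = np.einsum('bal,alg,gb->', G, S, M)  # Gamma^b_{al} S^{al}_g M^g_b
t4 = np.einsum('lab,abg,gl->', G, S, M)  # Gamma^l_{ab} S^{ab}_g M^g_l

print(np.isclose(t3, t2), np.isclose(t1, t4))
```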
Why can one relabel a dummy index? Recall that a repeated upper and lower index really means a sum over that index, so, for example,
$$ G^\alpha _\alpha = G^1_1 + G^2_2 + \cdots + G^n_n = G^\beta_\beta . $$
You asked about another way to approach this calculation. There is a result in Riemannian geometry (which I believe still holds for a pseudo-Riemannian metric) that at each point $x$ one can choose normal coordinates in which $\Gamma_{\alpha\beta}^\gamma = 0$ at that point. In those coordinates, $;\alpha$ and $\partial_\alpha$ agree at $x$, so your formula reduces to the ordinary product rule for functions; since $x$ was arbitrary and both sides are tensors, the identity holds everywhere. (Though I am a bit confused that it is called the chain rule instead.)
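To illustrate the normal-coordinate idea, here is a small sympy sketch with an illustrative metric of my own choosing (not from MTW): if $g$ deviates from the identity only at quadratic order in the coordinates, then every first derivative of $g$ vanishes at the origin, and since the Christoffel symbols are built from first derivatives of $g$, they all vanish there.

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]
n = 2

# A metric that is Euclidean to first order at the origin: the
# deviation from the identity is quadratic in the coordinates,
# as in Riemann normal coordinates.  (Illustrative choice.)
g = sp.Matrix([[1 + y**2, x*y],
               [x*y, 1 + x**2]])
g_inv = g.inv()

# Christoffel symbols of the Levi-Civita connection:
# Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij})
Gamma = [[[sp.Rational(1, 2) * sum(
    g_inv[k, l] * (sp.diff(g[l, j], coords[i])
                   + sp.diff(g[l, i], coords[j])
                   - sp.diff(g[i, j], coords[l]))
    for l in range(n))
    for j in range(n)] for i in range(n)] for k in range(n)]

# Every Gamma^k_{ij} vanishes at the origin.
at_origin = [sp.simplify(Gamma[k][i][j].subs({x: 0, y: 0}))
             for k in range(n) for i in range(n) for j in range(n)]
print(at_origin)
```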