Why is $R_{\rho \mu} = \eta^{\sigma \nu} R_{\rho \sigma \mu \nu}$?


Given the tetrad basis $\{(e_{\mu})\}$, i.e. smooth vector fields for which:

$$ (e_{\mu})^a (e_{\nu})_a = \eta_{\mu \nu} $$

we wish to find the components of the Ricci tensor in this basis, in terms of the Riemann tensor components (in the same basis), given by:

$$ R_{\rho \sigma \mu \nu} = R_{abcd} (e_{\rho})^a (e_{\sigma})^b (e_{\mu})^c (e_{\nu})^d $$

(For full clarity, abstract index notation is used for Latin indices. Also, I see the index of the basis vector fields as an indication of which basis vector is referred to, NOT as a component.) The book states, without derivation, that the components are given by:

$$ R_{\rho \mu} = \eta^{\sigma \nu} R_{\rho \sigma \mu \nu} $$

I tried deriving this out of curiosity but didn’t succeed:

$$ R_{abcd} = R_{\rho \sigma \mu \nu} (e_{\rho})_a (e_{\sigma})_b (e_{\mu})_c (e_{\nu})_d $$ $$ R_{ac} = R_{abc}{}^b = R_{\rho \sigma \mu \nu} (e_{\rho})_a (e_{\sigma})_b (e_{\mu})_c (e_{\nu})^b = \eta^{\sigma \nu} R_{\rho \sigma \mu \nu} (e_{\rho})_a (e_{\mu})_c $$

Of course, repeated Greek indices (referring to components) are summed over. The issue is that when I try to insert $(e_{\alpha})^a$ and $(e_{\beta})^c$ into the tensor to determine the components, I run into a problem:

$$ R_{\alpha \beta} = R_{ac} (e_{\alpha})^a (e_{\beta})^c = R_{\rho \sigma \mu \nu} \eta^{\sigma \nu} \eta^{\rho}_{\alpha} \eta^{\mu}_{\beta} $$

The problem here is that the entries of $\eta$ can be $\pm 1$, meaning that for certain index values the Ricci component would come out as the negative of the claimed formula. I'm struggling to find where my thinking goes wrong. Since the basis doesn't satisfy $(e_{\mu})_a (e_{\nu})^a = \delta_{\mu \nu}$, does this problem always arise when inserting basis vectors to extract components?
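To make the observation concrete, here is a small numerical sketch (numpy; the boosted Minkowski frame is just an illustrative choice of tetrad): pairing the basis with its metric-lowered self gives $\eta_{\mu\nu}$, not $\delta_{\mu\nu}$, whereas pairing it with its dual basis does give $\delta^{\mu}{}_{\nu}$.

```python
import numpy as np

# Illustration: a pseudo-orthonormal basis pairs with its metric-lowered
# self to eta_{mu nu}, while pairing with its DUAL basis gives delta^mu_nu.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # flat metric g_ab in coordinates
phi = 0.3                             # boost rapidity (arbitrary)
e = np.eye(4)                         # columns e[:, mu] are the vectors (e_mu)^a
e[0, 0] = e[1, 1] = np.cosh(phi)
e[0, 1] = e[1, 0] = np.sinh(phi)

lowered = eta @ e                     # columns are (e_mu)_a = g_ab (e_mu)^b
assert np.allclose(e.T @ lowered, eta)    # (e_mu)^a (e_nu)_a = eta_{mu nu}

dual = np.linalg.inv(e)               # rows are the dual covectors, defined by duality
assert np.allclose(dual @ e, np.eye(4))   # dual pairing gives delta^mu_nu
```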

Any help is appreciated! This has been bothering me for a while.


There are 2 answers below.

BEST ANSWER

Write $e_\rho{}^a$ for the basis vectors and $e^\rho{}_a$ for the basis covectors. Note that $R_{abcd}$ is four times covariant, so it should be a linear combination of (tensor products of) the basis covectors: \begin{align} R_{abcd} &= R_{\rho\sigma\mu\nu}e^\rho{}_ae^\sigma{}_be^\mu{}_ce^\nu{}_d. \end{align}

Meanwhile, $R^d{}_{abc}$ has the expansion \begin{align} R^d{}_{abc} &= g^{xd}R_{xabc} \\ &= g^{\alpha\beta} R_{\rho\sigma\mu\nu} e_\alpha{}^x e_\beta{}^d e^\rho{}_x e^\sigma{}_a e^\mu{}_b e^\nu{}_c \\ &= g^{\alpha\beta} R_{\rho\sigma\mu\nu} \delta_\alpha^\rho e_\beta{}^d e^\sigma{}_a e^\mu{}_b e^\nu{}_c \\ &= g^{\rho\beta} R_{\rho\sigma\mu\nu} e_\beta{}^d e^\sigma{}_a e^\mu{}_b e^\nu{}_c. \end{align} (In particular, its components are $R^\beta{}_{\sigma\mu\nu}=g^{\rho\beta} R_{\rho\sigma\mu\nu}$, but we don't need this.)

Now to contract, set $d=b$, so that \begin{align} R_{ac}=R^b{}_{abc} &= g^{\rho\beta} R_{\rho\sigma\mu\nu} e_\beta{}^b e^\sigma{}_a e^\mu{}_b e^\nu{}_c \\ &= g^{\rho\beta} R_{\rho\sigma\mu\nu} \delta_\beta^\mu e^\sigma{}_a e^\nu{}_c \\ &= g^{\rho\mu} R_{\rho\sigma\mu\nu} e^\sigma{}_a e^\nu{}_c. \end{align}

This works for any basis. In particular, for a tetrad you get $g^{\rho\mu}=\eta^{\rho\mu}$.
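The final identity can be checked numerically: it holds for any four-index array, with no Riemann symmetries needed, because it is just a change of basis combined with one contraction against the inverse metric. A minimal sketch in numpy follows (the flat metric and the boosted frame are illustrative choices, not part of the derivation above):

```python
import numpy as np

# Metric g_ab in the coordinate basis (flat, for simplicity) and a tetrad
# given by a Lorentz boost: columns e[:, mu] are the vectors e_mu^a.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
phi = 0.7
e = np.eye(4)
e[0, 0] = e[1, 1] = np.cosh(phi)
e[0, 1] = e[1, 0] = np.sinh(phi)
assert np.allclose(e.T @ eta @ e, eta)  # pseudo-orthonormality: g(e_mu, e_nu) = eta_{mu nu}

# A random four-index array stands in for R_{abcd}; no symmetries are needed.
rng = np.random.default_rng(0)
R = rng.standard_normal((4, 4, 4, 4))

# Tetrad components R_{rho sigma mu nu} = R_{abcd} e_rho^a e_sigma^b e_mu^c e_nu^d
R_tet = np.einsum('abcd,ar,bs,cm,dn->rsmn', R, e, e, e, e)

# Contraction done in the coordinate basis, as in the answer:
# R_{ac} = g^{xb} R_{xabc}, then take tetrad components of the result.
g_inv = np.linalg.inv(eta)
Ric = np.einsum('xb,xabc->ac', g_inv, R)
Ric_tet = np.einsum('ac,as,cn->sn', Ric, e, e)

# The same contraction done purely on tetrad components, with g^{rho mu} = eta^{rho mu}:
Ric_formula = np.einsum('rm,rsmn->sn', g_inv, R_tet)
assert np.allclose(Ric_tet, Ric_formula)
```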

SECOND ANSWER

As commented, this is all by definition. Contracting the inverse Minkowski metric with the fourth (lower) index of the Riemann tensor raises that index, and naming the raised index $\sigma$ sums it against the second: $$\eta^{\sigma \nu} R_{\rho \sigma \mu \nu} \equiv R_{\rho \sigma \mu}{}^{\sigma}$$ There is no need to mess with inserting vectors and covectors into the tensors; summations with the metric and inverse metric are compacted into this notational process of raising and lowering indices.

Then, the Ricci tensor's components are defined to be those of the Riemann tensor with the raised index summed against the middle lower index (it is like a trace of the Riemann tensor): $$R_{ab} \equiv R_{a c b}{}^{c}$$ Hence $$R_{\rho \sigma \mu}{}^{\sigma} = R_{\rho \mu}$$

In fact, the reference to any specific tetrad basis is somewhat misleading. Anytime you write down a tensor as a symbol with some upper and/or lower indices, the use of a basis is implicit. This is the power of tensors: the algebra looks exactly the same no matter the basis. So we usually leave the basis implicit unless converting between coordinate systems (applying the tensor transformation laws), and simply work purely in components.
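A quick numerical check of the index gymnastics (numpy; the random array is a stand-in for the tetrad components, since the identity needs no symmetries): raising the fourth index with $\eta^{\nu\lambda}$ and then contracting it against the second index gives the same result as the single contraction $\eta^{\sigma\nu}R_{\rho\sigma\mu\nu}$.

```python
import numpy as np

eta_inv = np.diag([-1.0, 1.0, 1.0, 1.0])  # inverse Minkowski metric eta^{mu nu}
rng = np.random.default_rng(1)
R = rng.standard_normal((4, 4, 4, 4))     # arbitrary components R_{rho sigma mu nu}

# Raise the fourth index: R_{rho sigma mu}^{nu} = eta^{nu lambda} R_{rho sigma mu lambda}
R_raised = np.einsum('nl,rsml->rsmn', eta_inv, R)
# ...then contract the raised index with the second: R_{rho sigma mu}^{sigma}
lhs = np.einsum('rsms->rm', R_raised)
# One-step version: eta^{sigma nu} R_{rho sigma mu nu}
rhs = np.einsum('sn,rsmn->rm', eta_inv, R)
assert np.allclose(lhs, rhs)
```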