Using 4 by 4 Lorentz boost matrices to verify the tensor transformation law, $T^{\mu'\nu'}=\Lambda^{\mu'}_\alpha\Lambda^{\nu'}_\beta T^{\alpha\beta}$


In the following question, $K^{\prime}$ is a frame moving in the positive $x$-direction with speed $v$ relative to frame $K$:

A tensor of type $(2,0)$ is $16$ numbers, $T^{\mu\nu}$ with the transformation property $$T^{\mu^\prime\nu^\prime}=\Lambda^{\mu^{\prime}}_\alpha\Lambda^{\nu^\prime}_\beta T^{\alpha\beta}\tag{1}$$ under the Lorentz transformation $x^{\mu^\prime}=\Lambda^{\mu^{\prime}}_\nu x^\nu$

Suppose that in frame $K$, $T^{00} = \alpha$, where $\alpha$ is a constant and the other $15$ components of $T^{\mu\nu}$ are zero. Determine the components $T^{\mu^\prime\nu^\prime}$ in $K^\prime$.

I want to try to answer this using $4\times 4$ Lorentz boost matrix multiplication. The author gives a much faster and simpler method, which I will show at the end, but I do not understand it.


The $4\times 4$ Lorentz boost matrix along the $x$-direction is $\Lambda^{\mu^{\prime}}_\alpha=\begin{pmatrix}\gamma & -\beta\gamma & 0 & 0\\-\beta\gamma & \gamma & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{pmatrix}.$

Then, multiplying the two boost matrices $\Lambda^{\mu^{\prime}}_\alpha$ and $\Lambda^{\nu^\prime}_\beta$ as in $(1)$, eqn $(1)$ in $4\times 4$ matrix format should read $$\begin{align}T^{\mu^\prime\nu^\prime}&=\begin{pmatrix}\gamma & -\beta\gamma & 0 & 0\\-\beta\gamma & \gamma & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{pmatrix}\begin{pmatrix}\gamma & -\beta\gamma & 0 & 0\\-\beta\gamma & \gamma & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{pmatrix}\begin{pmatrix}\alpha & 0 & 0 & 0\\0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\end{pmatrix}\\&=\alpha\begin{pmatrix}\gamma^2(1+\beta^2) & -2\beta\gamma^2 & 0 & 0\\-2\beta\gamma^2 & \gamma^2(1+\beta^2) & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0 & 0\\0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\end{pmatrix}\\&=\alpha\gamma^2\begin{pmatrix}1+\beta^2 & 0& 0 & 0\\-2\beta & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\end{pmatrix}\tag{2}\end{align}$$ The calculation was performed using wolframalpha.com; here $\beta=\dfrac{v}{c}$ and $\gamma=\left(1-v^2/c^2\right)^{-1/2}$.

Addressing the question, the only non-zero components of $T^{\mu^\prime\nu^\prime}$ are $$T^{0^\prime 0^\prime}=\alpha\gamma^2(1+\beta^2)=\alpha\gamma^2\left(1+\frac{v^2}{c^2}\right)$$ and $$T^{0^\prime 1^\prime}=-2\beta\alpha\gamma^2=-2\frac{\gamma^2 v \alpha}{c}$$
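For what it's worth, the matrix product written in Eq. $(2)$ can be reproduced numerically. Here is a short NumPy sketch; the values $\beta = 0.6$ and $\alpha = 1$ are arbitrary samples, not part of the problem:

```python
# Reproduce Eq. (2): the product  Lambda @ Lambda @ T  exactly as written
# above (sample values; beta = 0.6, alpha = 1 are arbitrary choices).
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
alpha = 1.0  # the single nonzero component T^00

L = np.array([[ gamma,      -beta*gamma, 0.0, 0.0],
              [-beta*gamma,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])
T = np.zeros((4, 4))
T[0, 0] = alpha

result = L @ L @ T  # the product order used in Eq. (2)

# Only two entries survive, matching the final matrix in Eq. (2):
print(result[0, 0])  # alpha * gamma^2 * (1 + beta^2)
print(result[1, 0])  # -2 * alpha * beta * gamma^2
```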


Now the problem is that, according to the author's solution, my answer above is wrong; the correct solution is:

As the only non-zero component of $T^{\mu\nu}$ is $T^{00}=\alpha$ we have $$T^{\mu^\prime\nu^\prime}=\Lambda^{\mu^{\prime}}_0\Lambda^{\nu^\prime}_0 T^{00}=\Lambda^{\mu^{\prime}}_0\Lambda^{\nu^\prime}_0 \alpha\tag{3}$$ Accordingly, $$T^{0^\prime 0^\prime}=\Big(\Lambda^{0^{\prime}}_0\Big)^2\alpha=\gamma^2\alpha,$$ $$T^{0^\prime 1^\prime}=T^{1^\prime 0^\prime}=\Lambda^{0^{\prime}}_0\Lambda^{1^{\prime}}_0\alpha=-\frac{\gamma^2 v \alpha}{c},$$ $$T^{1^\prime 1^\prime}=\Big(\Lambda^{1^{\prime}}_0\Big)^2\alpha=\frac{\gamma^2 v^2\alpha}{c^2}$$


Looking at the author's solution, it is clear that they are explicitly writing the matrix components of $\Lambda^{\mu^\prime}_{\alpha}$ such that $$\Lambda^{\mu^\prime}_\alpha=\Lambda^{\nu^\prime}_\beta=\begin{pmatrix}\Lambda^{0^\prime}_0 & \Lambda^{0^\prime}_1 & \Lambda^{0^\prime}_2 & \Lambda^{0^\prime}_3 \\ \Lambda^{1^\prime}_0 & \Lambda^{1^\prime}_1 & \Lambda^{1^\prime}_2 & \Lambda^{1^\prime}_3 \\ \Lambda^{2^\prime}_0 & \Lambda^{2^\prime}_1 & \Lambda^{2^\prime}_2 & \Lambda^{2^\prime}_3 \\ \Lambda^{3^\prime}_0 & \Lambda^{3^\prime}_1 & \Lambda^{3^\prime}_2 & \Lambda^{3^\prime}_3\end{pmatrix}$$

In fact, just by inspection, the matrix that gives the same components for $T^{\mu^\prime\nu^\prime}$ as the author's is $$T^{\mu^\prime\nu^\prime}=\alpha\gamma^2\begin{pmatrix}1 & -\beta & 0 & 0\\-\beta & \beta^2 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\end{pmatrix}\tag{4}$$ but how to obtain this matrix is beyond my comprehension.


So my questions are,

  1. Why is the logic I used in eqn $(2)$ not giving me the same answer as the author (I have only $2$ non-zero components, whereas the author has $4$) and how can I obtain the correct matrix form for the components, shown in eqn $(4)$?
  2. How did the author 'know in advance' that $T^{\mu^\prime\nu^\prime}=\Lambda^{\mu^{\prime}}_0\Lambda^{\nu^\prime}_0 T^{00}$ (from eqn $(3)$) would give non-zero components? Put another way, I'm asking how the author knew the lower indices on the $\Lambda$ matrices are zero even though there is a contribution to $T^{0^\prime 1^\prime}=T^{1^\prime 0^\prime}$?
On BEST ANSWER

It looks like you may be misunderstanding the Einstein summation convention or misapplying it. I'll begin with a refresher in case that is helpful. When an index letter is repeated (once downstairs and once upstairs), this indicates a summation over all the values of this repeated index. For example, if we have a vector $V^\alpha$ with components $(V^0, V^1, V^2, V^3)$, and we want to apply a Lorentz boost to it, then we write the boosted vector as $$V^{\mu'} = \Lambda^{\mu'}_{\; \alpha} V^\alpha,$$ with the $\alpha$ index repeated. This is a shorthand for a summation over the values of $\alpha$; that is, $$ V^{\mu'} = \Lambda^{\mu'}_{\; 0} V^0 + \Lambda^{\mu'}_{\; 1} V^1 + \Lambda^{\mu'}_{\; 2}V^2 + \Lambda^{\mu'}_{\; 3} V^3, $$ where, as you noted, we are working with explicit components of $\Lambda$ (and $V$). This is how we usually do things with tensor index notation - we work with expressions that are written in terms of the components of the tensors, rather than in terms of the tensors themselves.

However, it can be helpful when learning to know how these relate to the way you may be used to writing such equations. In fact, the operation that Einstein summation is a shorthand for is effectively matrix multiplication. The example I gave before could be rewritten as $$ \begin{pmatrix} V^{0'} \\ V^{1'} \\ V^{2'} \\ V^{3'} \end{pmatrix} = \begin{pmatrix} \Lambda^{0'}_{\; 0} & \Lambda^{0'}_{\; 1} & \Lambda^{0'}_{\; 2} & \Lambda^{0'}_{\; 3} \\ \Lambda^{1'}_{\; 0} & \Lambda^{1'}_{\; 1} & \Lambda^{1'}_{\; 2} & \Lambda^{1'}_{\; 3} \\ \Lambda^{2'}_{\; 0} & \Lambda^{2'}_{\; 1} & \Lambda^{2'}_{\; 2} & \Lambda^{2'}_{\; 3} \\ \Lambda^{3'}_{\; 0} & \Lambda^{3'}_{\; 1} & \Lambda^{3'}_{\; 2} & \Lambda^{3'}_{\; 3} \\ \end{pmatrix} \begin{pmatrix} V^{0} \\ V^{1} \\ V^{2} \\ V^{3} \end{pmatrix}, $$ and you should make sure you are convinced that this gives the same components as the earlier expression I gave.
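You can convince yourself of this numerically as well. Here is a small NumPy sketch (the values of $\beta$ and the components of $V$ are arbitrary samples) comparing the matrix product with the written-out component sum:

```python
# Check that the matrix product Lambda @ V matches the component-by-component
# sum  V^{mu'} = sum_alpha Lambda^{mu'}_alpha V^alpha.
# The values of beta and V are arbitrary samples.
import numpy as np

beta = 0.5
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[ gamma,      -beta*gamma, 0.0, 0.0],
              [-beta*gamma,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])
V = np.array([2.0, 1.0, -3.0, 0.5])

V_matrix = L @ V                                    # matrix form
V_sum = np.array([sum(L[mu, a] * V[a] for a in range(4))
                  for mu in range(4)])              # explicit summation

print(np.allclose(V_matrix, V_sum))  # True: the two forms agree
```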

Now, to address the problem at hand - your issue is that you have interpreted the adjacent matrices as being multiplied together. That is, you multiplied $\Lambda^{\mu'}_{\; \alpha}$ with $\Lambda^{\nu'}_{\; \beta}$. This is incorrect. You have to be careful when evaluating an equation written in tensor index notation using matrix multiplication, as it is easy to fall into traps like this.

When multiplying matrices, we write the matrices adjacent to each other to indicate they are multiplied. However, in tensor index notation, matrix multiplication is indicated by a repeated index. You'll note there are no repeated indices between $\Lambda^{\mu'}_{\; \alpha}$ and $\Lambda^{\nu'}_{\; \beta}$; instead, each of them shares an index with $T^{\alpha \beta}$. In fact, the correct way to write this equation as a matrix multiplication is in the order $$ T' = \Lambda T \Lambda, $$ where I'm omitting the indices to indicate we are working with the matrices and not the components (technically, I should probably write it as $T' = \Lambda T \Lambda^T$, for reasons I can go into if you would like; however, it doesn't matter here as $\Lambda$ is symmetric). If you try working through this multiplication, you should find you get the same result as the author.
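Here is a NumPy sketch of that multiplication in the correct order (the values $\beta = 0.6$ and $\alpha = 1$ are arbitrary samples), which reproduces the author's components:

```python
# Evaluate the tensor law in the correct matrix order, T' = Lambda T Lambda^T.
# (Lambda is symmetric here, so the transpose makes no difference.)
# beta = 0.6 and alpha = 1 are arbitrary sample values.
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
alpha = 1.0
L = np.array([[ gamma,      -beta*gamma, 0.0, 0.0],
              [-beta*gamma,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])
T = np.zeros((4, 4))
T[0, 0] = alpha

T_prime = L @ T @ L.T

print(T_prime[0, 0])  # gamma^2 * alpha
print(T_prime[0, 1])  # -beta * gamma^2 * alpha  (equals T_prime[1, 0])
print(T_prime[1, 1])  # beta^2 * gamma^2 * alpha
```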

Now I will address your second point, and hopefully once you understand it you will see that the technique used by the author is somewhat more straightforward than the matrix method. The key thing to note is that all the $T^{\alpha \beta}$ components are $0$, except for $T^{00} = \alpha$. This means when we expand out the summations over the $\alpha$ and $\beta$ indices, the only nonzero terms are those with the indices $\alpha, \beta = 0$.

Let's see that explicitly. Expanding the $\alpha$ index: \begin{align} T^{\mu' \nu'} &= \Lambda^{\mu'}_{\; \alpha} \Lambda^{\nu'}_{\; \beta} T^{\alpha \beta} \\ &= \Lambda^{\mu'}_{\; 0} \Lambda^{\nu'}_{\; \beta} T^{0\beta} + \Lambda^{\mu'}_{\; 1} \Lambda^{\nu'}_{\; \beta} T^{1\beta} + \Lambda^{\mu'}_{\; 2} \Lambda^{\nu'}_{\; \beta} T^{2\beta} + \Lambda^{\mu'}_{\; 3} \Lambda^{\nu'}_{\; \beta} T^{3\beta}. \end{align} Now, $T^{0\beta}$ will be nonzero when $\beta = 0$, but $T^{1\beta}$, $T^{2\beta}$, and $T^{3\beta}$ are all zero, no matter what $\beta$ is. This means the last 3 terms of our equation are zero, and so we can write $$ T^{\mu' \nu'} = \Lambda^{\mu'}_{\; 0} \Lambda^{\nu'}_{\; \beta} T^{0\beta}. $$ Now, we can apply the same approach to expanding the $\beta$ index: \begin{align} T^{\mu' \nu'} &= \Lambda^{\mu'}_{\; 0} \Lambda^{\nu'}_{\; \beta} T^{0\beta} \\ &= \Lambda^{\mu'}_{\; 0} \Lambda^{\nu'}_{\; 0} T^{00} + \Lambda^{\mu'}_{\; 0} \Lambda^{\nu'}_{\; 1} T^{01} + \Lambda^{\mu'}_{\; 0} \Lambda^{\nu'}_{\; 2} T^{02} + \Lambda^{\mu'}_{\; 0} \Lambda^{\nu'}_{\; 3} T^{03} \\ &= \Lambda^{\mu'}_{\; 0} \Lambda^{\nu'}_{\; 0} \alpha \qquad \text{(as $T^{00} = \alpha$ and $T^{01}, T^{02}, T^{03}$ are zero)} \end{align} This is how the author arrived at Eq. (3).
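The same bookkeeping can be checked numerically. This NumPy sketch (sample values $\beta = 0.6$, $\alpha = 1$) compares the full double sum against keeping only the single surviving $(\alpha, \beta) = (0, 0)$ term:

```python
# Compare the full double sum over alpha and beta with the single surviving
# term Lambda^{mu'}_0 Lambda^{nu'}_0 T^{00}, as in Eq. (3).
# beta = 0.6 and the constant alpha = 1 are arbitrary sample values.
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
alpha_const = 1.0
L = np.array([[ gamma,      -beta*gamma, 0.0, 0.0],
              [-beta*gamma,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])
T = np.zeros((4, 4))
T[0, 0] = alpha_const

# Full sum: T'^{mu nu} = sum_{a,b} L[mu,a] L[nu,b] T[a,b]
full = np.einsum('ma,nb,ab->mn', L, L, T)

# Single surviving term: L[mu,0] * L[nu,0] * T[0,0]
shortcut = L[:, 0:1] * L[:, 0:1].T * T[0, 0]

print(np.allclose(full, shortcut))  # True: all other terms vanish
```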

We've managed to simplify our expression for the components of $T'$ to $$T^{\mu' \nu'} = \Lambda^{\mu'}_{\; 0} \Lambda^{\nu'}_{\; 0} \alpha.$$ Now, think about which components $\Lambda^{\mu'}_{\; 0}$ are nonzero. Referring to the matrix you wrote in your question, we can see that $\Lambda^{0'}_{\; 0} = \gamma$, and $\Lambda^{1'}_{\; 0} = - \beta \gamma$, but $\Lambda^{\mu'}_{\; 0} = 0$ for $\mu = 2, 3$. The same holds for $\Lambda^{\nu'}_{\; 0}$. Thus, the only nonzero components $T^{\mu' \nu'}$ will be those where we have $\mu$ and $\nu$ equal to $0$ or $1$, as otherwise at least one of the $\Lambda$ components is zero; e.g. $$ T^{1'2'} = \Lambda^{1'}_{\; 0} \Lambda^{2'}_{\; 0} \alpha = - \beta \gamma \cdot 0 \cdot \alpha = 0. $$ This is how the author arrives at $T^{0'0'}, T^{1'0'}, T^{0'1'}, T^{1'1'}$ being the only nonzero components. You can check their expressions yourself by substituting the components into Eq. (3).
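Equivalently, since only the first column of $\Lambda$ enters, the whole transformed tensor is just $\alpha$ times the outer product of that column with itself. A NumPy sketch (sample values $\beta = 0.6$, $\alpha = 1$):

```python
# Since T'^{mu' nu'} = Lambda^{mu'}_0 Lambda^{nu'}_0 alpha, the transformed
# tensor is alpha times the outer product of Lambda's first column with
# itself. beta = 0.6 and alpha = 1 are arbitrary sample values.
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
alpha = 1.0
first_col = np.array([gamma, -beta*gamma, 0.0, 0.0])  # Lambda^{mu'}_0

T_prime = alpha * np.outer(first_col, first_col)

# Only the mu, nu in {0, 1} block is nonzero:
print(T_prime[:2, :2])  # alpha * gamma^2 * [[1, -beta], [-beta, beta^2]]
```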

TLDR:

  1. When writing as a matrix multiplication, it should be $T' = \Lambda T \Lambda$, while you have evaluated $T' = \Lambda \Lambda T$.
  2. If you expand out the summations, the only nonzero terms are those where $\Lambda$ has a zero lower index.

Let me know if you would like me to go into more detail on any of the points above.