Tensor and Vector Notation


I'm given the tensor $X^{\mu\nu}$ and vector $V^\mu$ of the form $$X^{\mu\nu} = \begin{bmatrix} 2 & 0 & 1 & -1 \\ -1 & 0 & 3 & 2 \\ -1 & 1 & 0 & 0 \\ 2 & 1 & 1 & 2 \end{bmatrix},\quad V^{\mu} = (-1,2,0,-2)$$ and I need to find the components of $X^\mu_{\,\,\,\nu}$, $X_\mu^{\,\,\,\nu}$, $X^{(\mu\nu)}$, $X_{[\mu\nu]}$, etc., but I have no idea what this notation means.

Best Answer

What you're looking for is the raising/lowering of indices, and the symmetrization/antisymmetrization of tensors.

I'm not certain why you're given the vector $V$, but I'll assume there is a follow-up question asking what $X_{i}^{\;j}V^i$ means. Since I'll be using that notation from the start, I'll go ahead and explain it.

Given some covector $\omega_i$ and some vector $V^j$, the pairing of the two is written $$(\omega,V)=\omega_iV^i=\sum_{i=1}^n\omega_iV^i,$$ using the Einstein summation convention (a repeated upper/lower index pair is summed over).
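To make the summation convention concrete, here is a small numerical sketch using the $V$ from the question and a hypothetical covector $\omega$ (not from the original post), with `numpy.einsum` spelling out the index contraction:

```python
import numpy as np

# Hypothetical covector omega_i (chosen for illustration only)
# and the vector V^i from the question.
omega = np.array([1.0, 0.0, 2.0, -1.0])
V = np.array([-1.0, 2.0, 0.0, -2.0])

# Einstein summation: (omega, V) = omega_i V^i = sum_i omega_i V^i.
pairing = np.einsum('i,i->', omega, V)
print(pairing)  # equivalent to np.dot(omega, V)
```

The `'i,i->'` subscript string says: both arrays carry the index $i$, and no index survives on the output, so $i$ is summed over.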

If you have a bilinear form $g_{ij}$ and a vector $V^i$, then the pairing induces a covector of the form $$\omega_j=g_{ij}V^i=\sum_{i=1}^ng_{ij}V^i.$$

If one is given an inner product $g_{ij}$, we use the same symbol to represent the new covector, so in the above example we write $$V_j=g_{ij}V^i.$$
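A minimal sketch of lowering the index of the question's $V^\mu$, assuming the Euclidean metric $\delta_{ij}$ (since no metric is given in the problem):

```python
import numpy as np

# Assumed metric: Euclidean delta_ij, i.e. the identity matrix.
g = np.eye(4)
V_upper = np.array([-1.0, 2.0, 0.0, -2.0])  # V^i from the question

# V_j = g_ij V^i : contract the metric with the vector.
V_lower = np.einsum('ij,i->j', g, V_upper)
print(V_lower)
```

With $\delta_{ij}$ the components are numerically unchanged, which is why the distinction between upper and lower indices is invisible in ordinary Euclidean vector calculus.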

This then generalizes in the natural way to any tensor: $$X^i_{\;j}:=g_{jk}X^{ik},$$ and similarly, $$X_i^{\;j}:=g_{ik}X^{kj}.$$
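The same contraction applied to the question's $X^{\mu\nu}$ can be sketched as follows. With the Euclidean $\delta_{ij}$ the components would again be unchanged, so this sketch uses the Minkowski metric $\eta = \mathrm{diag}(-1,1,1,1)$ as a hypothetical choice (not given in the original problem) to show the components actually moving:

```python
import numpy as np

X = np.array([[ 2., 0., 1., -1.],
              [-1., 0., 3.,  2.],
              [-1., 1., 0.,  0.],
              [ 2., 1., 1.,  2.]])

# Hypothetical metric for illustration: Minkowski diag(-1,1,1,1).
g = np.diag([-1., 1., 1., 1.])

# X^i_j = g_jk X^ik : lower the second index.
X_up_down = np.einsum('jk,ik->ij', g, X)

# X_i^j = g_ik X^kj : lower the first index.
X_down_up = np.einsum('ik,kj->ij', g, X)

print(X_up_down)  # first column of X negated
print(X_down_up)  # first row of X negated
```

Because this $g$ is diagonal, lowering the second index just rescales each column by the corresponding diagonal entry, and lowering the first index rescales each row.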

The symmetrization of a tensor $X^{ij}$ is given by $$X^{(ij)}=\frac{1}{2}(X^{ij}+X^{ji}),$$ and the antisymmetrization is given by $$X^{[ij]}=\frac{1}{2}(X^{ij}-X^{ji}).$$
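In matrix form, symmetrization and antisymmetrization are just averages of $X$ with its transpose; a short sketch with the question's matrix:

```python
import numpy as np

X = np.array([[ 2., 0., 1., -1.],
              [-1., 0., 3.,  2.],
              [-1., 1., 0.,  0.],
              [ 2., 1., 1.,  2.]])

# X^(ij) = (X^ij + X^ji) / 2  -- the symmetric part.
X_sym = 0.5 * (X + X.T)

# X^[ij] = (X^ij - X^ji) / 2  -- the antisymmetric part.
X_antisym = 0.5 * (X - X.T)

print(X_sym)
print(X_antisym)
```

Note that the two parts always recombine to the original tensor, $X^{ij} = X^{(ij)} + X^{[ij]}$, and that $X^{[ij]}$ is antisymmetric by construction.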

If you're not given an inner product $g_{ij}$, it's probably assumed to be the Euclidean one $\delta_{ij}$ (the Kronecker delta, written as the identity matrix in matrix form).

Typically, any first course in differential geometry covers basic tensor manipulations on a vector space. I believe Lee's Introduction to Smooth Manifolds has an entire chapter dedicated to this before jumping into tensor bundles.

Edit:

For a brief introduction to tensors: let $V$ be a real vector space, and let $V^*$ denote its dual space (the space of all linear functionals $V\to\mathbb{R}$, which we call covectors). Then one can think of an inner product $g$ as a symmetric, positive-definite, bilinear form on $V$, that is, $g:V\times V\to\mathbb{R}$. A tensor generalizes this notion. A tensor $T$ of rank $(k,l)$ is a multilinear map $$T:(V^*\times V^*\times\cdots\times V^*)\times(V\times V\times \cdots\times V)\to\mathbb{R},$$ where there are $k$ copies of $V^*$ in the above, and $l$ copies of $V$.

Essentially, a $(k,l)$-tensor is just a real-valued map that "eats" $k$ covectors and $l$ vectors and "spits out" a real number. Many tensor manipulations arise when these tensors don't eat enough covectors or vectors. For example, if our inner product $g$ "eats" a single vector $V$, it still has room to "eat" one more vector before giving a real number. What remains is therefore equivalent to a covector, which is exactly the definition of $V_j=g_{ij}V^i$ as a covector.