Conceptual difference between $\delta_{ij}$ and $\delta^i_j$

Question

How is $\delta_{ij}$ related to $\delta^i_j$? Here $\delta_{ij}= \begin{cases} 1 \qquad\text{if} \qquad i=j \\0 \qquad \text{if} \qquad i\neq j\end{cases}$

Context

I'm watching this playlist by Dr. Schuller, and at the 53-minute mark he made that observation when he was defining $\langle e_i, e_j\rangle = \delta_{ij}$ (the inner product on some Hilbert space). He then emphasized that $\delta_{ij}$ does not give the components of the identity map: $\delta_{ij}$ is related to bilinear forms, while $\delta^i_j$ is related to endomorphisms.

It seems that there is something more fundamental going on here than just lowering or raising the index of $\delta$, but I didn't quite get it. Thanks for any help.

Best answer

He is being very careful to distinguish between a tensor over a vector space and its components with respect to a basis, which (unfortunately) are often treated as the same thing in many contexts. In this particular case, his main point is to warn the students not to conflate different tensors just because their components might have the same numerical values (in a given basis).

By the way, he has a few other lecture series: one on general relativity (see lecture 3 on multilinear algebra) and one called the Geometric Anatomy of Theoretical Physics (see lecture 8 on tensor space theory over a field). You might find these lectures helpful.


Just so we are absolutely clear, I'll gather some definitions/theorems here. In what follows, let $V$ be a finite-dimensional vector space over a field $\Bbb{F}$ (later we'll take it to be $\Bbb{R}$, but for now that's not necessary), and we write $V^*$ for the dual space. Then, we have the important definition:

Definition.

An $(r,s)$ tensor over $V$ is by definition a multi-linear map $\underbrace{V^* \times \dots \times V^*}_{r \text{ times}} \times \underbrace{V \times \dots \times V}_{s \text{ times}} \to \Bbb{F}$. The set of all such $(r,s)$ tensors will be denoted $T^r_s(V)$.

We also have an important theorem:

Theorem.

If $V$ is a finite-dimensional vector space over $\Bbb{F}$, then $V$ is isomorphic to $(V^*)^* \equiv V^{**}$, the double dual space. In fact, the map $\iota : V \to V^{**}$ defined by setting for all $v \in V$, and all $\alpha \in V^*$: \begin{align} \left(\iota(v) \right)(\alpha) &:= \alpha(v) \end{align} is an isomorphism.

Using the theorem above, one can show that $T^1_1(V)$ and $\text{End}(V)$ are canonically isomorphic; in other words, you can think of any $(1,1)$ tensor as an endomorphism or vice-versa. Next, we make the definition of the components of a tensor relative to a basis:
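For concreteness (filling in a step the answer uses later; this is the standard construction), the canonical isomorphism $\text{End}(V) \to T^1_1(V)$ sends an endomorphism $\phi$ to the $(1,1)$ tensor $\tilde{\phi}$ defined by \begin{align} \tilde{\phi}(\alpha, v) := \alpha(\phi(v)) \qquad \text{for all } \alpha \in V^*,\, v \in V. \end{align}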

Definition.

Let $T$ be an $(r,s)$ tensor over $V$. Let $\{e_1, \dots, e_n\}$ be a basis of $V$, and let $\{\epsilon^1, \dots, \epsilon^n\}$ be the dual basis of $V^*$. Then, the numbers \begin{align} T(\epsilon^{i_1}, \dots, \epsilon^{i_r}, e_{j_1}, \dots, e_{j_s}) \in \Bbb{F} \end{align} (where the indices range over $i_1, \dots, i_r, j_1, \dots, j_s \in \{1, \dots, n\}$) are called the components of the tensor $T$ with respect to the basis $\{e_1, \dots, e_n\}$ (and the corresponding dual basis). These numbers are typically written as $T^{i_1 \dots i_r}_{j_1 \dots j_s}$ for short.

The reason for making this definition of components is that if you know all these components, you can uniquely reconstruct the tensor as \begin{align} T = \sum_{i_1, \dots, i_r, j_1, \dots, j_s} T^{i_1 \dots i_r}_{j_1 \dots j_s}\,\, \iota(e_{i_1}) \otimes \dots \otimes \iota(e_{i_r}) \otimes \epsilon^{j_1} \otimes \dots \otimes \epsilon^{j_s}. \end{align} In other words, once you choose a basis for the vector space, all the information about the $(r,s)$ tensor $T$ is contained entirely in its components, $T^{i_1 \dots i_r}_{j_1 \dots j_s}$, relative to that basis (notice how an $(r,s)$ tensor has $r$ indices upstairs and $s$ indices downstairs). But for the sake of emphasis, let me repeat: all of this requires a choice of basis.
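If a numerical illustration helps, here is a small sketch in Python/NumPy (entirely my own illustration, not from the lecture; the matrices are arbitrary): it computes the components of a $(0,2)$ tensor on $\Bbb{R}^2$ with respect to a non-standard basis, then reconstructs the tensor from those components, matching the formula above.

```python
import numpy as np

# A (0,2) tensor T(u, v) = u^T A v on R^2, stored (in the standard basis) as:
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# A basis {e_1, e_2}, stored as the columns of B:
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# The dual basis {eps^1, eps^2} consists of the rows of B^{-1},
# since then eps^i(e_j) = (B^{-1} B)_{ij} = delta^i_j:
B_inv = np.linalg.inv(B)

# Components T_{ij} := T(e_i, e_j), computed all at once:
T_ij = B.T @ A @ B

# Reconstruction T = sum_{i,j} T_{ij} eps^i (x) eps^j; as a matrix in the
# standard basis this is (B^{-1})^T T_ij B^{-1}:
A_rebuilt = B_inv.T @ T_ij @ B_inv

print(np.allclose(A, A_rebuilt))  # True: the components determine the tensor
```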


Now, let $V$ be a finite-dimensional vector space over $\Bbb{R}$, and let $g: V \times V \to \Bbb{R}$ be an inner product (he probably denoted it as $\langle \cdot, \cdot \rangle $ but this takes much longer to type so I'll just call it $g$). Also, let $I: V \to V$ denote the identity map. We observe the following:

  • By definition, an inner product (on a REAL vector space) is a bilinear map $V \times V \to \Bbb{R}$, i.e., it is a $(0,2)$ tensor on $V$. As such, if we choose a basis, we can start talking about what its components are.
  • The identity map $I: V \to V$ is an endomorphism. By my earlier remark, for finite-dimensional $V$, $\text{End}(V)$ is isomorphic to $T^1_1(V)$. I shall denote the image of the identity endomorphism $I$ under the isomorphism $\text{End}(V) \to T^1_1(V)$ by $\tilde{I} : V^* \times V \to \Bbb{R}$. Since it is a tensor, we can start talking about its components relative to a basis.

So, now, let $\{e_1, \dots, e_n\}$ be a basis for $V$, and $\{\epsilon^1, \dots, \epsilon^n\}$ be the dual basis. Then, according to my definition above, we denote the components of the tensors as \begin{align} g_{ij} := g(e_i, e_j) \quad \text{and} \quad (\tilde{I})^i_j := \tilde{I}(\epsilon^i, e_j). \end{align} Well, what are these numbers equal to? For the second case, if you keep track of how the isomorphism was defined (see the explicit formula above), you'll see that $\tilde{I}(\epsilon^i, e_j) = \epsilon^i(I(e_j)) = \epsilon^i(e_j)$ (evaluating a dual basis covector on a basis vector of the original vector space), and this is equal to \begin{align} \tilde{I}(\epsilon^i, e_j) = \epsilon^i(e_j) = \begin{cases}1 & \text{if $i = j$} \\ 0 & \text{if $i \neq j$} \end{cases} \end{align} (the final equality is by definition of the dual basis).

What is $g(e_i, e_j)$? Well, we can't say without further information. But if we assume that $\{e_1, \dots, e_n\}$ is an orthonormal basis, then by definition \begin{align} g_{ij} = g(e_i, e_j) = \begin{cases}1 & \text{if $i = j$} \\ 0 & \text{if $i \neq j$} \end{cases} \end{align} (the final equality is by definition of "orthonormal").

So, what we see is that for all $i,j$, the numbers $g_{ij}$ and $(\tilde{I})^i_j$ are equal. But does this mean the actual tensors $g$ and $\tilde{I}$ are equal? Of course not; they don't even have the same "type": one is a $(0,2)$ tensor, while the other is a $(1,1)$ tensor. This is the point the prof is trying to make; he wants to avoid confusion of the two different tensors $g$ and $\tilde{I}$, given that in a particular orthonormal basis their components are equal in numerical value: $g_{ij} = (\tilde{I})^i_j$.
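To see concretely that $g$ and $\tilde{I}$ really are different tensors despite having equal components in an orthonormal basis, here is another short NumPy sketch (again my own illustration, taking $V = \Bbb{R}^2$ and $g$ the standard dot product): in a non-orthonormal basis, the $(0,2)$ components $g_{ij}$ stop looking like the delta, while the $(1,1)$ components $(\tilde{I})^i_j$ remain the delta in every basis, because the two kinds of components transform differently.

```python
import numpy as np

G = np.eye(2)    # matrix of g in the standard (orthonormal) basis: g_{ij} = delta
Id = np.eye(2)   # matrix of the identity endomorphism: (I~)^i_j = delta

# A NON-orthonormal basis {e_1, e_2}, stored as the columns of B:
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B_inv = np.linalg.inv(B)

# (0,2) tensor: components pick up two copies of the basis, g_{ij} = g(e_i, e_j):
g_new = B.T @ G @ B

# (1,1) tensor: components transform by conjugation, (I~)^i_j = eps^i(I(e_j)):
I_new = B_inv @ Id @ B

print(g_new)  # [[1. 1.]
              #  [1. 2.]]  -- no longer the delta
print(I_new)  # [[1. 0.]
              #  [0. 1.]]  -- still the delta, in any basis
```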


So, now for the Kronecker delta, there are three ways to interpret it. The first is purely as a symbol: $\delta^i_j$, $\delta_{ij}$, $\delta^{ij}$, $^i\delta_j$, etc. are all just shorthand for the piecewise number assignment \begin{align} \begin{cases} 1 & \text{if $i = j$} \\ 0 & \text{if $i \neq j$} \end{cases} \end{align} (of course, no one ever writes $^i \delta_j$ or anything like that; I just wrote some nonsense to emphasize that on this interpretation the Kronecker delta is entirely a symbol, standing for nothing more than a concise way of specifying a bunch of numbers).

But as I've shown above, there exist tensors whose components with respect to a particular basis are equal in numerical value to the Kronecker delta symbol; that is the second interpretation. As a result, rather than writing $g$ and $\tilde{I}$ as the names of the tensors, people sometimes write $\delta$ itself to mean the abstract tensor; that is the third interpretation. In this last case, it is very dangerous to mix up the index placement, because then you'll be confusing the type of tensor you actually have (physicists very often keep track of their tensors by the number of indices and their placement). So, ultimately, the prof just wants to avoid this mix-up.