Can you illustrate the use of coordinate-free notations that serve as an alternative to Einstein summation notation with an example?


"Abstract index" and "coordinate free notations" are often submitted as alternatives to Einstein Summation notation. Could you illustrate their use using an example?

Here's a sum written in Einstein's notation:

$a_{ij}b_{kj} = a_{i}b_{k}$

How would you rewrite it in a modern way?


There are 3 answers below.


I guess I'd write that as $A B^{t}$. But I don't think that gets at the heart of what you're asking.
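The equality of the indexed sum $a_{ij}b_{kj}$ and the matrix expression $AB^{t}$ can be checked numerically; here is a minimal sketch using NumPy's `einsum` (the matrix shapes and entries are arbitrary example data):

```python
import numpy as np

# Arbitrary matrices playing the roles of a_{ij} and b_{kj}.
A = np.arange(6.0).reshape(2, 3)
B = np.arange(6.0, 12.0).reshape(2, 3)

# Einstein notation: sum over the repeated index j.
einstein = np.einsum('ij,kj->ik', A, B)

# Matrix notation: A times the transpose of B.
assert np.allclose(einstein, A @ B.T)
```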

And the following may help or not...but I offer it up anyhow. To really get a grasp of the difference of the two approaches, pick up a classical differential geometry book -- something like O'Neill's Elementary Differential Geometry, or perhaps do Carmo's book, although it's a bit less classical, or Millman and Parker --- sort of a happy mean between the other two. Get yourself familiar with the first and second fundamental forms, and Gaussian and mean curvatures. Then pick up Milnor's book on Morse Theory and look at part II (I think), which is a quick intro to differential geometry, done almost entirely from the point of view of the covariant derivative, and without coordinates. Each approach has its virtues, and looking at these two might help you understand them. It did for me.


I think you have some misconceptions about what you are asking. In any given basis, any linear operator is represented by a fixed matrix. When we talk about linear operators or tensors we are talking coordinate-free; when we talk about matrices, we are not. When you write $a_{ij}b_{kj}$ you have already chosen coordinates. If $A=(a_{ij})$ and $B=(b_{kj})$, where $A$ is the matrix representation of some $\varphi$ from a linear space $W$ to another space $U$ and $B$ is the representation of some $\psi$ from a space $V$ to $W$, then the matrix product $AB$ is just the function composition $\varphi\circ\psi$, defined without the use of any coordinates.
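The correspondence between matrix multiplication and composition of linear maps can be checked numerically; a minimal sketch (the dimensions and matrix entries below are arbitrary, illustrative choices):

```python
import numpy as np

# B represents psi: V -> W and A represents phi: W -> U in chosen bases
# (dims: V = 3, W = 2, U = 3; the entries are arbitrary example data).
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])          # 3x2 matrix of phi
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])     # 2x3 matrix of psi

v = np.array([1.0, 2.0, 3.0])       # a vector in V

# Applying psi, then phi, agrees with the single matrix A @ B,
# which therefore represents the composition phi . psi.
assert np.allclose(A @ (B @ v), (A @ B) @ v)
```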

P.S. What you have written, on the other hand, makes little sense, because there is no relation between $a_{ij}$ and $a_{i}$: they are not the same objects; one is a bilinear form and the other a linear form, using the concepts that @johannesvalks tried to introduce. If your question is how to write, for example, the expression $a^ib^i$ in a coordinate-free way, then I'd say that would be the scalar product: if $v\in V$, $w\in W$, $\Gamma:V\times W\to K$, and in some bases $\{v_i\}$ of $V$ and $\{w_i\}$ of $W$ the two vectors can be written $v=a^iv_i$, $w=b^iw_i$, and in such bases $\Gamma(v_i,w_j)=\delta_{ij}$, then your sum can be written simply as $\Gamma(v,w)=a^ib^i$.
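The scalar-product reading of the contraction $a^ib^i$ can be sketched numerically (the component values here are arbitrary examples):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])  # components a^i of v in the chosen basis
b = np.array([4.0, 5.0, 6.0])  # components b^i of w

# Einstein notation a^i b^i: contract over the repeated index i ...
contracted = np.einsum('i,i->', a, b)

# ... which is exactly the scalar product Gamma(v, w).
assert np.isclose(contracted, np.dot(a, b))
```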


Abstractly, tensors are elements of tensor products of vector spaces. For instance, a vector, which is a (0,1)-tensor (no covariant and one contravariant index), is an element of $V$. These are also called contravariant tensors and written (in coordinate form) $a^i$, with the upper index denoting contravariance. A covector, or covariant vector, is an element of the dual space $V^*$. Elements of the dual space are linear maps from $V$ to the underlying field (usually $\mathbb{R}$ or $\mathbb{C}$); covariant vectors are written with a lower index, $b_i$. To form a multi-indexed tensor, we just take the tensor product of spaces. For example, $c_i^j$ is a (1,1)-tensor, and is an element of $V^* \otimes V$.

Contraction is simple in this abstract notation, since covectors are exactly linear maps from $V$ to $\mathbb{R}$. For $a_{ij} b^{ik}$, the repeated index $i$ tells us that we need to contract those two slots. Thus we start from an element $a_{ij} b^{ik} \in V^* \otimes V^* \otimes V \otimes V$, take the first $V^*$ and the first $V$, and apply the evaluation map $V^* \otimes V \to \mathbb{R}$. This leaves us with an element $p_j^k \in V^* \otimes V \otimes \mathbb{R} \cong V^* \otimes V$.
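The contraction just described can be checked with NumPy's `einsum`; a small sketch with arbitrary example data (in coordinates, this particular contraction $a_{ij}b^{ik}$ happens to be the matrix product $a^{\mathsf T}b$):

```python
import numpy as np

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # components a_{ij} of a doubly covariant tensor
b = np.array([[5.0, 6.0],
              [7.0, 8.0]])  # components b^{ik} of a doubly contravariant tensor

# Contract the repeated index i: p_j^k = sum_i a_{ij} b^{ik}.
p = np.einsum('ij,ik->jk', a, b)

# In matrix terms this contraction is a^T b.
assert np.allclose(p, a.T @ b)
```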