In Geometric Algebra, how would one express the result of a tensor product in the language of GA?


Thanks for your time and effort. I appreciate your help.

I'm new to geometric algebra, and I understand that it subsumes much of linear algebra.

I was wondering, though: how can I learn to express a tensor product in terms of geometric algebra?

I asked an earlier question about linear operators and got a great response from Muphrid.

Here's that example to start with; then I'll get to my tensor product question.

Suppose I had the matrix operator:

$$\begin{bmatrix} 1 & 1 & -1 \\ 0 & 1 & -1 \\ 0 & -1 & 1 \end{bmatrix}$$

Muphrid responded:

You could express it as a function. Let your operator be $\underline T$. It could be described by

$$\begin{align*}\underline T(e_1) &= e_1 \\ \underline T(e_2) &= e_1 + e_2 - e_3 \\ \underline T(e_3) &= -e_1 -e_2 + e_3\end{align*}$$

You could instead use dot products to combine this into a single expression. Let $a$ be an arbitrary vector, and you have

$$\underline T(a) = (a \cdot e_1) e_1 + (a \cdot e_2) (e_1 + e_2 - e_3) + (a \cdot e_3) (-e_1 - e_2 + e_3)$$

In particular, notice that the last column is just the negative of the second column, so the expression simplifies to

$$\underline T(a) = (a \cdot e_1) e_1 + (a \cdot e_2 - a \cdot e_3)(e_1 + e_2 - e_3)$$

There is (so far) nothing inherently GA-like to expressing a linear operator this way, but it is a bit more amenable to some of the operations you might be asked to perform that come from GA.
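Muphrid's dot-product form is easy to check numerically; here is a minimal sketch with numpy (the test vector is an arbitrary choice):

```python
import numpy as np

# The matrix operator from the question; its columns are T(e1), T(e2), T(e3).
T = np.array([[1,  1, -1],
              [0,  1, -1],
              [0, -1,  1]])

e1, e2, e3 = np.eye(3)

def T_of(a):
    # Muphrid's dot-product form:
    # T(a) = (a.e1) e1 + (a.e2)(e1+e2-e3) + (a.e3)(-e1-e2+e3)
    return (a @ e1) * e1 + (a @ e2) * (e1 + e2 - e3) + (a @ e3) * (-e1 - e2 + e3)

a = np.array([2.0, -1.0, 3.0])
assert np.allclose(T_of(a), T @ a)  # agrees with the matrix acting on a
```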

Now for my tensor product question using Muphrid's response as a template.

If we had a tensor with elements from a tensor product of two 2x2 tensors:

$$e_{ij} \otimes f_{kl}$$

then the tensor product would be:

$$\begin{bmatrix} e_{11}f_{11} & e_{11}f_{21} & e_{21}f_{11} & e_{21}f_{21}\\ e_{11}f_{12} & e_{11}f_{22} & e_{21}f_{12} & e_{21}f_{22}\\ e_{12}f_{11} & e_{12}f_{21} & e_{22}f_{11} & e_{22}f_{21}\\ e_{12}f_{12} & e_{12}f_{22} & e_{22}f_{12} & e_{22}f_{22} \end{bmatrix}$$
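As a sanity check on the layout: the standard Kronecker product (numpy's `np.kron`) arranges the same sixteen products as $K_{2i+k,\,2j+l} = E_{ij}F_{kl}$, and the arrangement above appears to be the transpose of that convention. A minimal numeric sketch, with arbitrary placeholder entries standing in for the $e$'s and $f$'s:

```python
import numpy as np

# Placeholder 2x2 blocks (values are arbitrary, purely for the index check).
E = np.array([[1.0, 2.0],
              [3.0, 4.0]])
F = np.array([[5.0, 6.0],
              [7.0, 8.0]])

K = np.kron(E, F)  # standard Kronecker product, shape (4, 4)

# Block indexing of the Kronecker product: K[2*i + k, 2*j + l] == E[i, j] * F[k, l]
assert K[2*0 + 1, 2*1 + 0] == E[0, 1] * F[1, 0]

# The transposed convention used in the matrix above: kron(E, F).T == kron(E.T, F.T)
assert np.allclose(np.kron(E, F).T, np.kron(E.T, F.T))
```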

Now how would I express this in terms of Geometric Algebra?

Would I use the same process that Muphrid showed like this:

Let your operator be $\underline T$. It could be described by

$$\begin{align*} \underline T(e_{?}) &= e_{11}f_{11} + e_{11}f_{12} + e_{12}f_{11} + e_{12}f_{12}\\ \underline T(e_{?}) &= e_{11}f_{21} + e_{11}f_{22} + e_{12}f_{21} + e_{12}f_{22} \\ \underline T(e_{?}) &= e_{21}f_{11} + e_{21}f_{12} + e_{22}f_{11} + e_{22}f_{12} \\ \underline T(e_{?}) &= e_{21}f_{21} + e_{21}f_{22} + e_{22}f_{21} + e_{22}f_{22}\end{align*}$$

Continuing Muphrid's process, you could instead use dot products to combine this into a single expression.
Let $a$ be an arbitrary vector, and you have

$$\underline T(a) = (a \cdot e_1)(e_{11}f_{11} + e_{11}f_{12} + e_{12}f_{11} + e_{12}f_{12}) + (a \cdot e_2)(e_{11}f_{21} + e_{11}f_{22} + e_{12}f_{21} + e_{12}f_{22}) + (a \cdot e_3)(e_{21}f_{11} + e_{21}f_{12} + e_{22}f_{11} + e_{22}f_{12}) + (a \cdot e_4)(e_{21}f_{21} + e_{21}f_{22} + e_{22}f_{21} + e_{22}f_{22})$$

Is this correct for tensors, or is there something else that should be happening since it is a tensor?

I get the feeling instead that I should take the original 2x2 tensors that created the 4x4 tensor, use Muphrid's process on each of them, and then multiply the two, something like:

$$\underline T(a) = (a \cdot e_1) (e_{11} + e_{12}) + (a \cdot e_2) (e_{21} + e_{22})$$

$$\underline T(b) = (b \cdot f_1) (f_{11} + f_{12}) + (b \cdot f_2) (f_{21} + f_{22})$$

$$\underline T(a)\,\underline T(b) = (a \cdot e_1)(b \cdot f_1)(e_{11}f_{11} + e_{11}f_{12} + e_{12}f_{11} + e_{12}f_{12}) + (a \cdot e_1)(b \cdot f_2)(e_{11}f_{21} + e_{11}f_{22} + e_{12}f_{21} + e_{12}f_{22}) + (a \cdot e_2)(b \cdot f_1)(e_{21}f_{11} + e_{21}f_{12} + e_{22}f_{11} + e_{22}f_{12}) + (a \cdot e_2)(b \cdot f_2)(e_{21}f_{21} + e_{21}f_{22} + e_{22}f_{21} + e_{22}f_{22})$$

Which is really similar. Am I missing something? Again, I would appreciate any help.
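The coefficient bookkeeping in that product can be checked symbolically. Note this treats everything as commuting scalars, so it only verifies how the four groups of terms pair up, not the GA multiplication itself (basis elements do not commute in GA). A sketch with sympy, with illustrative symbol names:

```python
import sympy as sp

# Scalar coefficients and grouped basis sums, treated as commuting symbols:
# a1, a2 stand for (a.e1), (a.e2); b1, b2 for (b.f1), (b.f2);
# E1 = e11+e12, E2 = e21+e22, F1 = f11+f12, F2 = f21+f22 (names are placeholders).
a1, a2, b1, b2 = sp.symbols('a1 a2 b1 b2')
E1, E2, F1, F2 = sp.symbols('E1 E2 F1 F2')

prod = sp.expand((a1*E1 + a2*E2) * (b1*F1 + b2*F2))

# The expansion gives exactly the four cross-term groups from the question.
rhs = a1*b1*E1*F1 + a1*b2*E1*F2 + a2*b1*E2*F1 + a2*b2*E2*F2
assert sp.expand(prod - rhs) == 0
```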

And thanks to Muphrid for the previous help.


There are 2 answers below.


Note: what comes below is merely an attempt at trying to make sense of tensor products.

Some insight may be gained from the wiki article on tensor products. In particular, wiki says that, if you have two maps $\underline S$ and $\underline T$ that you want to find a tensor product of, then the combined map is

$$(\underline S \otimes \underline T)(u \otimes v) = \underline S(u) \otimes \underline T(v)$$
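In matrix terms this identity can be verified numerically, with the Kronecker product standing in for $\otimes$; a sketch with random matrices (the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((2, 2))
T = rng.standard_normal((2, 2))
u = rng.standard_normal(2)
v = rng.standard_normal(2)

# (S (x) T)(u (x) v) = S(u) (x) T(v), with (x) realized as np.kron
lhs = np.kron(S, T) @ np.kron(u, v)
rhs = np.kron(S @ u, T @ v)
assert np.allclose(lhs, rhs)
```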

In my edit to my answer from the previous question, I claimed that a tensor product of two vectors can correspond to a linear map expressed without tensor products:

$$a \otimes b \mapsto \underline M: V \to V, \underline M(c) = a(b \cdot c), \forall c \in V$$
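That correspondence is easy to check with numpy's outer product (arbitrary vectors, purely a numeric sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(3)
b = rng.standard_normal(3)
c = rng.standard_normal(3)

# a (x) b as a linear map: M(c) = a (b . c)
M = np.outer(a, b)
assert np.allclose(M @ c, a * (b @ c))
```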

Similarly, then, we should have

$$\underline S(u) \otimes \underline T(v) \mapsto \underline M: V \times V \times V \to V, \underline M(u, v, c) = \underline S(u) [\underline T(v) \cdot c]$$

It may be easier instead to consider the map

$$M(u, v, c, d) = [\underline S(u) \cdot d][\underline T(v) \cdot c]$$

and now by plugging in basis vectors we can extract components. At this point, however, I must stop. It's not clear to me what the usual matrix form of the tensor product should be acting upon. While I can easily see that we should get a 4-index tensor (and wiki bears this out also), the arrangement of components into a matrix puzzles me, and it's not clear to me at all what kind of vector (or matrix) should be acted upon.

Edit: nevertheless, I do feel that this is in the right direction for translating tensor products. Each product of elements $S_{ij} T_{k\ell}$ is obtained through unique linear arguments to the map $M_{ijk\ell}$, so this procedure seems to have all the same information as the tensor product.
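That extraction can be sketched numerically (the dimensions and random matrices here are purely illustrative): plugging basis vectors into $M$ recovers each product $S_{ij} T_{k\ell}$.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

def M(u, v, c, d):
    # M(u, v, c, d) = [S(u) . d] [T(v) . c]
    return ((S @ u) @ d) * ((T @ v) @ c)

I = np.eye(3)
# Plugging in basis vectors: M(e_j, e_l, e_k, e_i) = S[i, j] * T[k, l]
i, j, k, l = 1, 2, 0, 1
assert np.isclose(M(I[j], I[l], I[k], I[i]), S[i, j] * T[k, l])
```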


The computational process of multiplying two tensors is mathematically isomorphic to Clifford multiplication, i.e., it produces the same result. The only difference is the way you conceptualize the problem, which can be considerably simpler in the Clifford representation.

For example, the 4x4 matrix of your tensor product can be viewed as a simple geometric structure in Clifford algebra, built from the basis vectors e1 and e2 together with the products of every pair of basis vectors: e1*e1, e1*e2, e2*e1, e2*e2 (the squares e1*e1 and e2*e2 are scalars, while the mixed products are bivectors). You can picture a bivector as the plane area defined by the wedge product of two vectors (the surface of the parallelogram they span), but that plane area also carries an orientation: it rotates from a to b for a*b, and from b to a for b*a, through the angle between a and b.

Clifford algebra views your matrix of numbers as a three-dimensional structure. The Clifford multiplication of two tensors e and f is expressed in Clifford algebra as the simple product e*f; the rest is implicit in the math!
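To make those basis-vector products concrete, here is a hand-rolled sketch of the geometric product of basis blades in Cl(2,0) (a toy illustration, not a full GA library; in this positive signature e1*e1 and e2*e2 square to the scalar +1, and the mixed products give the bivector e1e2 up to sign):

```python
def blade_mul(a, b):
    """Geometric product of two basis blades in Cl(2,0), with blades written as
    sorted tuples of vector indices: () = scalar, (1,) = e1, (2,) = e2,
    (1, 2) = e1*e2. Returns (sign, blade)."""
    seq = list(a) + list(b)
    sign = 1
    i = 0
    while i < len(seq) - 1:
        if seq[i] > seq[i + 1]:      # swapping adjacent vectors flips the sign
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
            sign = -sign
            i = max(i - 1, 0)
        elif seq[i] == seq[i + 1]:   # e_k * e_k = +1 in this signature
            del seq[i:i + 2]
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(seq)

print(blade_mul((1,), (1,)))   # e1*e1: a scalar
print(blade_mul((1,), (2,)))   # e1*e2: the bivector e12
print(blade_mul((2,), (1,)))   # e2*e1: the bivector e12 with opposite sign
```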

Now if you are asking about the computational process for obtaining the result of that tensor product, then that computation will be mathematically isomorphic to doing the equivalent with conventional tensors, and I read your meta-question as "why bother with Clifford algebra when you can do the same thing the old-fashioned way?". The advantage of Clifford algebra, what makes it distinct from regular algebra, is that these are no longer tensor products but just plain products, e*f, and they are equivalent to tensors when e and f are of sufficiently high grade; the products of higher-grade Clifford elements (multivectors) correspond to third-order, fourth-order, and higher-order tensors. The point is that these higher-order products need not have their own specific "tensor product" rules; they are all just simple multiplications, and the details are implicit in the math.