Thanks for your time and effort. I appreciate your help.
I'm new to geometric algebra, and I understand that it can subsume much of linear algebra.
I was wondering, though, how I could learn to express a tensor product in terms of geometric algebra.
I asked an earlier question about linear operators and got a great response from Muphrid.
Here's the example to start with; then I'll get to my tensor product question.
Suppose I had the matrix operator:
$$\begin{bmatrix} 1 & 1 & -1 \\ 0 & 1 & -1 \\ 0 & -1 & 1 \end{bmatrix}$$
Muphrid responded:
You could express it as a function. Let your operator be $\underline T$. It could be described by
$$\begin{align*}\underline T(e_1) &= e_1 \\ \underline T(e_2) &= e_1 + e_2 - e_3 \\ \underline T(e_3) &= -e_1 -e_2 + e_3\end{align*}$$
You could instead use dot products to combine this into a single expression. Let $a$ be an arbitrary vector, and you have
$$\underline T(a) = (a \cdot e_1) e_1 + (a \cdot e_2) (e_1 + e_2 - e_3) + (a \cdot e_3) (-e_1 - e_2 + e_3)$$
In particular, notice that the last column is just the negative of the second column, so the expression simplifies to
$$\underline T(a) = (a \cdot e_1) e_1 + (a \cdot e_2 - a \cdot e_3)(e_1 + e_2 - e_3)$$
There is (so far) nothing inherently GA-like to expressing a linear operator this way, but it is a bit more amenable to some of the operations you might be asked to perform that come from GA.
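As a quick numeric sanity check of the dot-product form above (a sketch only; the test vector `a` is an arbitrary choice of mine, not from the question):

```python
import numpy as np

# The matrix operator from the question.
T = np.array([[1.,  1., -1.],
              [0.,  1., -1.],
              [0., -1.,  1.]])

# Standard basis vectors e1, e2, e3 as the rows of the identity.
e = np.eye(3)

def T_of(a):
    """Muphrid's simplified form: T(a) = (a.e1) e1 + (a.e2 - a.e3)(e1 + e2 - e3)."""
    return (a @ e[0]) * e[0] + (a @ e[1] - a @ e[2]) * (e[0] + e[1] - e[2])

a = np.array([2., -3., 5.])
print(np.allclose(T_of(a), T @ a))  # True: the two forms agree
```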
Now for my tensor product question using Muphrid's response as a template.
If we had a tensor built from a tensor product of two 2×2 tensors, with elements
$$e_{ij} \otimes f_{kl}$$
then the tensor product would be:
$$\begin{bmatrix} e_{11}f_{11} & e_{11}f_{21} & e_{21}f_{11} & e_{21}f_{21} \\ e_{11}f_{12} & e_{11}f_{22} & e_{21}f_{12} & e_{21}f_{22} \\ e_{12}f_{11} & e_{12}f_{21} & e_{22}f_{11} & e_{22}f_{21} \\ e_{12}f_{12} & e_{12}f_{22} & e_{22}f_{12} & e_{22}f_{22} \end{bmatrix}$$
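For what it's worth, this kind of product can be built numerically with NumPy's Kronecker product. Note that `np.kron` uses the standard block convention (the $(i,j)$ block of $E \otimes F$ is $E_{ij} F$), which is one of several possible arrangements of the same sixteen products; the concrete matrices below are placeholders of my own, not values from the question:

```python
import numpy as np

# Placeholder numeric 2x2 matrices standing in for e_{ij} and f_{kl}.
E = np.array([[1., 2.],
              [3., 4.]])
F = np.array([[5., 6.],
              [7., 8.]])

# Standard Kronecker convention: (E ⊗ F)[2i+k, 2j+l] = E[i,j] * F[k,l].
K = np.kron(E, F)

i, j, k, l = 0, 1, 1, 0
print(K[2*i + k, 2*j + l] == E[i, j] * F[k, l])  # True
```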
Now how would I express this in terms of Geometric Algebra?
Would I use the same process that Muphrid showed like this:
Let your operator be $\underline T$. It could be described by
$$\begin{align*} \underline T(e_1) &= e_{11}f_{11} + e_{11}f_{12} + e_{12}f_{11} + e_{12}f_{12}\\ \underline T(e_2) &= e_{11}f_{21} + e_{11}f_{22} + e_{12}f_{21} + e_{12}f_{22} \\ \underline T(e_3) &= e_{21}f_{11} + e_{21}f_{12} + e_{22}f_{11} + e_{22}f_{12} \\ \underline T(e_4) &= e_{21}f_{21} + e_{21}f_{22} + e_{22}f_{21} + e_{22}f_{22}\end{align*}$$
Continuing Muphrid's process, you could instead use dot products to combine this into a single expression.
Let $a$ be an arbitrary vector, and you have
$$\begin{align*}\underline T(a) = {}& (a \cdot e_1)(e_{11}f_{11} + e_{11}f_{12} + e_{12}f_{11} + e_{12}f_{12}) \\ &+ (a \cdot e_2)(e_{11}f_{21} + e_{11}f_{22} + e_{12}f_{21} + e_{12}f_{22}) \\ &+ (a \cdot e_3)(e_{21}f_{11} + e_{21}f_{12} + e_{22}f_{11} + e_{22}f_{12}) \\ &+ (a \cdot e_4)(e_{21}f_{21} + e_{21}f_{22} + e_{22}f_{21} + e_{22}f_{22})\end{align*}$$
Is this correct for tensors, or is there something else that should be happening since it is a tensor?
I get the feeling instead that I should take the original 2×2 tensors that produced the 4×4 tensor, apply Muphrid's process to each 2×2, and then multiply the two, something like:
$$\underline T(a) = (a \cdot e_1) (e_{11} + e_{12}) + (a \cdot e_2) (e_{21} + e_{22})$$
$$\underline S(b) = (b \cdot f_1) (f_{11} + f_{12}) + (b \cdot f_2) (f_{21} + f_{22})$$
$$\begin{align*}\underline T(a)\,\underline S(b) = {}& (a \cdot e_1)(b \cdot f_1)(e_{11}f_{11} + e_{11}f_{12} + e_{12}f_{11} + e_{12}f_{12}) \\ &+ (a \cdot e_1)(b \cdot f_2)(e_{11}f_{21} + e_{11}f_{22} + e_{12}f_{21} + e_{12}f_{22}) \\ &+ (a \cdot e_2)(b \cdot f_1)(e_{21}f_{11} + e_{21}f_{12} + e_{22}f_{11} + e_{22}f_{12}) \\ &+ (a \cdot e_2)(b \cdot f_2)(e_{21}f_{21} + e_{21}f_{22} + e_{22}f_{21} + e_{22}f_{22})\end{align*}$$
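This "apply each small map, then combine" intuition can be sanity-checked numerically, at least in the standard matrix/Kronecker convention: applying the two maps separately and tensoring the results matches applying the combined 4×4 operator to $a \otimes b$. A sketch with placeholder numbers of my own:

```python
import numpy as np

# Placeholder numeric 2x2 matrices for the two original tensors.
E = np.array([[1., 2.],
              [3., 4.]])
F = np.array([[5., 6.],
              [7., 8.]])

a = np.array([1., -2.])
b = np.array([3., 5.])

# Apply each small map separately, then tensor the results...
separate = np.kron(E @ a, F @ b)

# ...versus applying the big 4x4 tensor-product operator to a ⊗ b.
combined = np.kron(E, F) @ np.kron(a, b)

print(np.allclose(separate, combined))  # True
```

This is the so-called mixed-product property of the Kronecker product, and it is the same identity quoted from Wikipedia in the answer below the question.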
which is very similar. Am I missing something? Again, I would appreciate any help.
And thanks to Muphrid for the previous help.
Note: what comes below is merely an attempt to make sense of tensor products.
Some insight may be gained from the Wikipedia article on tensor products. In particular, it says that if you have two maps $\underline S$ and $\underline T$ whose tensor product you want, then the combined map is
$$(\underline S \otimes \underline T)(u \otimes v) = \underline S(u) \otimes \underline T(v)$$
In my edit to my answer from the previous question, I claimed that a tensor product of two vectors can correspond to a linear map expressed without tensor products:
$$a \otimes b \mapsto \underline M: V \to V, \underline M(c) = a(b \cdot c), \forall c \in V$$
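Concretely, this correspondence is just the outer-product matrix: $a \otimes b$ acts on $c$ by scaling $a$ with $b \cdot c$. A small numeric check (the vectors are arbitrary choices of mine):

```python
import numpy as np

a = np.array([1., 2., 3.])
b = np.array([4., 5., 6.])
c = np.array([7., -1., 2.])

# The tensor product a ⊗ b, realized as the outer-product matrix M = a b^T.
M = np.outer(a, b)

# Acting on c, M gives a scaled by (b · c), i.e. M(c) = a (b · c).
print(np.allclose(M @ c, a * (b @ c)))  # True
```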
Similarly, then, we should have
$$\underline S(u) \otimes \underline T(v) \mapsto \underline M: V \times V \times V \to V, \underline M(u, v, c) = \underline S(u) [\underline T(v) \cdot c]$$
It may be easier instead to consider the map
$$M(u, v, c, d) = [\underline S(u) \cdot d][\underline T(v) \cdot c]$$
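Plugging basis vectors into this four-argument map does recover the individual products $S_{ij} T_{k\ell}$, and in the standard Kronecker layout these are exactly the entries of the combined matrix. A sketch with placeholder numeric matrices of my own (and note that which slot carries which index is a convention choice):

```python
import numpy as np

# Placeholder numeric 2x2 matrices for S and T.
S = np.array([[1., 2.],
              [3., 4.]])
T = np.array([[5., 6.],
              [7., 8.]])

def M(u, v, c, d):
    """M(u, v, c, d) = [S(u) . d] [T(v) . c]."""
    return ((S @ u) @ d) * ((T @ v) @ c)

e = np.eye(2)

# Plugging in basis vectors extracts the products S_ij T_kl ...
for i in range(2):
    for j in range(2):
        for k in range(2):
            for l in range(2):
                assert M(e[j], e[l], e[k], e[i]) == S[i, j] * T[k, l]

# ... which are the Kronecker-product entries (S ⊗ T)[2i+k, 2j+l].
K = np.kron(S, T)
print(K[2*0 + 1, 2*1 + 0] == M(e[1], e[0], e[1], e[0]))  # True
```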
and now, by plugging in basis vectors, we can extract components. At this point, however, I must stop. It's not clear to me what the usual matrix form of the tensor product should be acting upon. While I can easily see that we should get a 4-index tensor (and Wikipedia bears this out), how those components get arranged into a matrix, and what kind of vector (or matrix) that matrix should act upon, still puzzles me.
Edit: nevertheless, I do feel that this is in the right direction for translating tensor products. Each product of elements $S_{ij} T_{k\ell}$ is obtained from the map $M$ by a unique choice of basis-vector arguments, so this procedure seems to carry all the same information as the tensor product.