Motivation:
Tensors are built from vector spaces: under a suitable definition of addition and scalar multiplication, the tensors of a fixed type themselves satisfy the vector-space axioms. How do we define this addition and multiplication rigorously using abstract algebra? This is a quintessential question because tensors arise naturally in important fields of mathematical physics; one famous example is the PDE system expressed by the Einstein field equations: $$G_{\mu \nu}+\Lambda g_{\mu \nu}=\kappa T_{\mu \nu}.$$
Based on the work here: https://profoundphysics.com/einstein-field-equations-fully-written-out-what-do-they-look-like-expanded/ , there is no real reason to expand the tensor equation (the Einstein field equation) because it is intentionally made compact by the Einstein summation notation. Hence, it makes sense to generalize the equation (as I attempted previously) and clarify the gist of the question:
- Does there exist a tensor space $(T,+,*)$ endowed with a differential operator $\partial$ such that $\partial$ can act on the tensors $T_j\in (T,+,*)$ and can we notate this as $$\mathcal{T}:=(T,+,*,\partial)?$$
- What are the axioms for $\mathcal{T}$?
- Are there any special structural properties (such as isomorphism) arising between tensor spaces $\mathcal{T}_i$ and $\mathcal{T}_j$?
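As a concrete (and admittedly naive) sanity check on what $\mathcal{T}=(T,+,*,\partial)$ might mean, here is a Python sketch in which tensors are numpy arrays of components sampled on a grid, $+$ and $*$ act componentwise, and $\partial$ is a finite-difference derivative. The class name `TensorField`, the grid, and the discretization are my own illustrative choices, not a standard construction:

```python
import numpy as np

class TensorField:
    """Sketch of an element of (T, +, *, d): a rank-2 tensor field
    sampled on a 1-D coordinate grid (illustrative toy model only)."""

    def __init__(self, components, grid):
        # components: shape (len(grid), 4, 4) -- one 4x4 tensor per grid point
        self.components = np.asarray(components, dtype=float)
        self.grid = np.asarray(grid, dtype=float)

    def __add__(self, other):        # the "+" of (T, +, *): componentwise
        return TensorField(self.components + other.components, self.grid)

    def __rmul__(self, scalar):      # the "*": scalar multiplication
        return TensorField(scalar * self.components, self.grid)

    def d(self):                     # the "d": componentwise finite difference
        return TensorField(np.gradient(self.components, self.grid, axis=0),
                           self.grid)

x = np.linspace(0.0, 1.0, 50)
a = TensorField(np.stack([xi * np.eye(4) for xi in x]), x)
b = TensorField(np.stack([xi**2 * np.eye(4) for xi in x]), x)

# The operator is linear, which is what lets it act sensibly on tensor
# equations: d(a + b) = d(a) + d(b) and d(2a) = 2 d(a).
lhs, rhs = (a + b).d(), a.d() + b.d()
lhs2, rhs2 = (2.0 * a).d(), 2.0 * a.d()
print(np.allclose(lhs.components, rhs.components))    # True
print(np.allclose(lhs2.components, rhs2.components))  # True
```

The point of the sketch is only that a differential operator compatible with the algebraic structure must at minimum be linear over $+$ and $*$.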
---
Wikipedia is really helpful here. For example, $G_{\mu \nu}$ is a tensor with covariant (lower) indices $\mu, \nu$.
What I am really trying to do here is to understand the underlying algebraic concept of a tensor space equipped with an addition and multiplication (such that the notion of a differential operator makes sense on the tensors in the tensor space.) This is the essence of my question. My attempt below is incomplete and needs review.
Attempt 1:
Consider a tensor space $\mathcal{T}$ equipped with $+$ and $*$, i.e., $\mathcal{T}=(T,+,*)$. Frederic P. Schuller, PhD has a 2015 lecture series on this, and from his lectures I have discerned the following: a tensor space is built from a family of vector spaces $$V^{T}:=\left\{V_i\right\}_{i\in \mathbb{N}},$$ and a tensor is a multilinear map defined on the Cartesian product of (finitely many of) these spaces, $$\prod_{i}V_i,$$ rather than an element of the Cartesian product itself.
If this intuition is correct, then the set of such multilinear maps is itself a vector space: $$\left(\operatorname{Hom}\left(\prod_{i}V_i;\ \mathbb{R}\right),\bigoplus,\bigcirc\right),$$ where $\operatorname{Hom}(\cdot\,;\mathbb{R})$ denotes the multilinear maps into the base field.
So, to define inversion, we must first define the addition and multiplication according to the tenets of multilinear algebra. This is where I am convinced it should be reductive (it should reduce to a simpler, componentwise form):
Axioms for Addition and Multiplication of Tensors
$\bigoplus$ axioms (for all $A,B,C\in T$):
- $1_+$ (closure): $A\bigoplus B\in T$;
- $2_+$ (associativity): $(A\bigoplus B)\bigoplus C=A\bigoplus (B\bigoplus C)$;
- $3_+$ (identity): there is a zero tensor $0\in T$ with $A\bigoplus 0=A$;
- $4_+$ (inverses): for each $A$ there is $-A$ with $A\bigoplus (-A)=0$;
- $5_+$ (commutativity): $A\bigoplus B=B\bigoplus A$.

$\bigcirc$ axioms (for all $A,B\in T$ and scalars $\lambda,\mu$ in the base field):
- $1_*$ (closure): $\lambda\bigcirc A\in T$;
- $2_*$ (distributivity): $\lambda\bigcirc (A\bigoplus B)=(\lambda\bigcirc A)\bigoplus (\lambda\bigcirc B)$ and $(\lambda+\mu)\bigcirc A=(\lambda\bigcirc A)\bigoplus (\mu\bigcirc A)$;
- $3_*$ (compatibility): $(\lambda\mu)\bigcirc A=\lambda\bigcirc (\mu\bigcirc A)$;
- $4_*$ (unit): $1\bigcirc A=A$.
Ergo, because a tensor space is an extension of a vector space, tensors (elements of the tensor space) should have an easily defined additive inverse (the $-A$ demanded by $4_+$) that is consistent with the other axioms involved in the construction of the tensor space. I am confused as to what the precise formula or definition is and whether my attempt is correct.
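If inversion is componentwise, the inverse demanded by $4_+$ is just negation of every component. A minimal numpy sketch (the component values are made up for illustration):

```python
import numpy as np

# A rank-2 tensor on R^4 has a 4x4 array of components; the additive
# inverse required by axiom 4_+ is componentwise negation.
rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4))

neg_G = -G               # candidate additive inverse
zero = np.zeros((4, 4))  # the additive identity (the zero tensor)

print(np.array_equal(G + neg_G, zero))  # True: G + (-G) = 0
```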
Attempt Example
For example (in physics): $$G_{\mu \nu}\bigoplus\Lambda g_{\mu \nu}=\kappa T_{\mu \nu}. \quad \textbf{Eq } 1$$ How would we solve for $T_{\mu \nu}$? (Please note that I am aware of the late James M. Bardeen, PhD, who has an exact solution to the EFE.) Would it be as simple as saying $$T_{\mu \nu}=\kappa^{-1}\left(G_{\mu \nu}\bigoplus \Lambda g_{\mu \nu}\right)?\quad \textbf{Eq }2$$ (Note that no separate subtraction symbol should be needed: tensor subtraction is $\bigoplus$ applied with the additive inverse, $A\bigoplus(-B)$.)
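Using the standard arrangement of the EFE, $G_{\mu\nu}+\Lambda g_{\mu\nu}=\kappa T_{\mu\nu}$, and treating each tensor as a plain $4\times 4$ component array, solving for $T_{\mu\nu}$ really is componentwise field arithmetic. The matrices below are random stand-ins, not physical data:

```python
import numpy as np

# Hypothetical 4x4 component arrays standing in for G_{mu nu} and g_{mu nu};
# random values for illustration only, not a solution of the EFE.
rng = np.random.default_rng(1)
G = rng.standard_normal((4, 4))
g = rng.standard_normal((4, 4))
Lam, kappa = 2.0, 8.0 * np.pi  # cosmological constant and coupling (units suppressed)

# G + Lam*g = kappa*T  ==>  T = (1/kappa) * (G + Lam*g), componentwise.
T = (G + Lam * g) / kappa

print(np.allclose(G + Lam * g, kappa * T))  # True: Eq 1 is recovered
```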
Are the partial differential equations hidden inside the component arrays that form the tensors? How do we unpack these hidden PDEs and solve the entire system of PDEs?
I hope I have made my confusion evident while clarifying my own work.
Expansion to PDE
A PDE is a partial differential equation, defined, for example, as $$\partial_{x_i}^n y=f(x_1,x_2,x_3,\ldots,x_n).$$ Let $\partial=D$ for notational convenience. Then the EFE are (in some sense) a collection of equations distributed over the entries of the component arrays of all three tensors in the EFE: the stress-energy tensor $T_{\mu\nu}$, the Einstein tensor $G_{\mu\nu}$ (built from the Ricci curvature), and the metric tensor $g_{\mu\nu}$ (encoding the geometry of spacetime). So the EFE describe the deformation of spacetime according to stress, energy, and the underlying geometry of the universe ($g_{\mu\nu}$).
To understand this, imagine once again the cube. Each side of the cube represents a matrix, and each matrix contains inside it a series of PDE entries. What I mean by that is: $$D_{T}:=\sum_{i=1}^{n}D(f_i(\vec{v})).$$
Note that $n=3$ because we have 3 tensors. However, the real total number of equations is complicated to account for. Assume, as an illustration, that each matrix is $2\times 2$ and that the cube analogy holds. That would then mean there are $$\#=(2\cdot 2)\cdot 6\cdot 3=72=\text{rows}\times \text{columns} \times \text{sides of cube} \times \text{number of tensors}=r\times c\times |s|\times 3$$ equations.
The above formula for $\#$ is not fully correct: I do not know precisely what geometric solid is analogous to each tensor, because I do not understand the curvature tensor and the metric tensor. I may also be incorrect in stating that the stress-energy tensor is a cube-like tensor; a rank-2 tensor in 4 dimensions is in fact a $4\times 4$ array, not a cube. Regardless, the point is that the function $\#(r, c, s, 3)=3rcs$ would be the correct count if the cube picture held with the variables properly substituted in.
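For comparison with the cube heuristic: a rank-2 tensor in $m=4$ dimensions has $m^2$ components, so the EFE form 16 component equations, of which 10 are independent because all three tensors are symmetric in $\mu,\nu$. In code:

```python
# Component count for a rank-2 tensor equation in m = 4 dimensions.
m = 4
total = m * m                   # one scalar equation per index pair (mu, nu)
independent = m * (m + 1) // 2  # symmetric tensors: only these are independent

print(total, independent)  # 16 10
```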
I am sure that this has been done and calculated before. Moving on, the primary question is: how do we solve the system of PDEs without being bogged down by the size of the overall system (i.e., the number of terms and equations in the system), AND how would this PDE system be classified? In short, the details that matter are simply missing from what I have read.
I will now attempt to write out the PDE system using the partial differential operator $D$:
Let $T_{\mu\nu}=\{M_1, M_2,\ldots, M_{\text{dim}(T_{\mu\nu})}\}$.
Similarly: $g_{\mu\nu}=\{N_j\}_{j=1}^{\text{dim}(g_{\mu\nu})}$ and let, finally, $$G_{\mu \nu}=\{ G_{q} \}_{1\le q\le \text{dim}(G_{\mu \nu})}.$$
If we assume the intention of the equation was to state the tensors in 4-dimensional spacetime, then $\text{dim}=4$ for all tensors $T,G,$ and $g.$ $\Lambda$ is a scalar and, in actuality, the EFE possess another scalar attached to the stress-energy tensor (call it $\kappa$). So, the equation is actually $$G_{\mu \nu}+\Lambda g_{\mu\nu}=\kappa T_{\mu\nu}.$$
Let us take $\Lambda=\kappa=1$ (the scalar identity). Now, we have (within each matrix): $$D(F(\hat{v}))=G(\hat{w}),\quad \hat{v},\hat{w}\in \mathbb{R}^3,$$ with $$D=\partial_{xyz}.$$
Here, I am confused. How do we apply a partial differential operator to a vector (representing a reference frame, presumably) in 3 dimensions when there is clearly a fourth dimension (time)? Can we simply redefine it so that $\hat{v},\hat{w}$ are four-component vectors? Vectors in length, width, height, and time?
This would mean that we replace $\mathbb{R}^3$ with $\mathbb{R}^4$, but I am not sure this is a fair construction since the equation involves Minkowski space, too. I am trying to bridge the gap between the mathematical geometry of Minkowski space and the simplicity of $\mathbb{R}^4$. (The question now involves additional components, and may be outside the scope of the original question, namely tensor inversion in multilinear algebra.)
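On the $\mathbb{R}^3$ versus $\mathbb{R}^4$ worry: at the level of partial derivatives, nothing structural changes when a fourth coordinate is added; a partial differential operator acts coordinate by coordinate, however many coordinates there are. A small numpy sketch with an illustrative function $f(t,x,y,z)=tx+yz$ (my own choice, purely for demonstration; this ignores the Minkowski metric signature, which affects index raising and lowering, not differentiation):

```python
import numpy as np

# Sample f(t, x, y, z) = t*x + y*z on a small 4-D grid and take all four
# partial derivatives at once with np.gradient. Grid and f are illustrative.
axes = [np.linspace(0.0, 1.0, 9) for _ in range(4)]
t, x, y, z = np.meshgrid(*axes, indexing="ij")
f = t * x + y * z

df_dt, df_dx, df_dy, df_dz = np.gradient(f, *axes)

# Because f is linear in each coordinate, the finite differences are exact:
# df/dt = x and df/dy = z.
print(np.allclose(df_dt, x), np.allclose(df_dy, z))  # True True
```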
Further Revision of Basics using Wolfram Mathworld:
According to https://mathworld.wolfram.com/Tensor.html , an $n$th-rank tensor in $m$-dimensional space is (paraphrasing) a mathematical object with $n$ indices and $m^n$ components that obeys certain transformation rules.
What this means is that the indices in the EFE are very important: they indicate the rank of each tensor ($n=2$ for all three tensors), and $m$ (in the case of the EFE) would be $4$, since Einstein worked over 4-dimensional spacetime. Consequently, each tensor has $4^2=16$ components, and the index equation abbreviates a system of scalar component equations, one for each fixed pair of indices: $$G_{\mu\nu}+\Lambda g_{\mu\nu}=\kappa T_{\mu\nu} \quad\text{for each } 1\le \mu,\nu\le 4, \qquad \mathcal{G},\mathcal{g},\mathfrak{T}\in \mathcal{T}:=(T,+,*).$$ (The free indices $\mu,\nu$ are not summed over; Einstein summation contracts only repeated upper-lower index pairs.)
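To make the "system of component equations" reading explicit, here is a sketch with $\Lambda=\kappa=1$ (as in the simplification above) and random component arrays (illustrative only), enumerating the 16 scalar equations hidden in the index notation:

```python
import numpy as np

# Random 4x4 component arrays, illustrative stand-ins for the EFE tensors.
rng = np.random.default_rng(2)
G = rng.standard_normal((4, 4))
g = rng.standard_normal((4, 4))
Lam, kappa = 1.0, 1.0
T = (G + Lam * g) / kappa  # chosen so that every component equation holds

# The free indices mu, nu are NOT summed: the index equation is shorthand
# for one scalar equation per fixed (mu, nu) pair.
holds = [np.isclose(G[mu, nu] + Lam * g[mu, nu], kappa * T[mu, nu])
         for mu in range(4) for nu in range(4)]
print(len(holds), all(holds))  # 16 True
```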