Today I learned about tensors as multilinear maps. I usually think of a tensor as a multidimensional array of numbers with fixed transformation laws, and I am having trouble understanding how a tensor could be a multilinear map from a set of dual vectors and vectors to $\mathbb{R}$. More specifically, I am having a hard time understanding the concept of a multilinear map itself. A definition of tensors similar to how I think of them: http://en.wikipedia.org/wiki/Tensor#As_multidimensional_arrays. A definition of tensors in terms of multilinear maps: http://en.wikipedia.org/wiki/Tensor#As_multilinear_maps, or "Spacetime and Geometry: An Introduction to General Relativity" by Sean Carroll, page 21.
Tensors as Multilinear maps?
Asked by user207766 (https://math.techqa.club/user/user207766/detail)
Background: I work in the field of numerical relativity. I've read Carroll's book, but not recently.
It's pretty common for physics students to reach this point in their education, not really knowing anything about what tensors are or how they're talked about in higher mathematics. That's not really the students' fault. If your education was anything like mine, your first exposure to this stuff probably came from an electromagnetism course, or maybe a classical mechanics course. You stuck with vector calculus, and maybe the odd matrix now and then to do transformations, and that's all you needed.
Let's start with matrices, though: you might've thought of matrices as arrays of numbers, just with some funny "matrix multiplication" operation that lets you multiply matrices and vectors to get other vectors. That's good enough to do the computation, but it's a very narrow way of looking at things.
Instead, think of the matrix abstractly as corresponding to a vector-valued linear function of a vector. It's a vector field! Right? A vector-valued function of a vector is, according to everything you've been taught, a vector field. The only additional property we're imposing is that this function be linear.
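The defining property here is linearity, which is easy to check numerically. Below is a minimal NumPy sketch (with an arbitrary example matrix, not anything from the text) verifying that a matrix, viewed as a function of a vector, respects linear combinations:

```python
import numpy as np

# Hypothetical 3x3 matrix M, viewed as the linear map v -> M v.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
v = rng.standard_normal(3)
w = rng.standard_normal(3)
a, b = 2.0, -3.0

# Linearity: M(a v + b w) == a M(v) + b M(w)
lhs = M @ (a * v + b * w)
rhs = a * (M @ v) + b * (M @ w)
assert np.allclose(lhs, rhs)
```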
Example: consider the matrix
$$T = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}$$
You can write $T$ like a function. Given a vector $\vec v$ and a basis $\vec u_1, \vec u_2, \vec u_3$, you could write
$$\begin{align*} T(\vec v) &= [(a \vec u_1 + b \vec u_2 + c \vec u_3) \cdot \vec v ] \vec u_1 \\ & + [(d \vec u_1 + e \vec u_2 + f \vec u_3) \cdot \vec v ] \vec u_2 \\ & + [(g \vec u_1 + h \vec u_2 + i \vec u_3) \cdot \vec v ] \vec u_3\end{align*}$$
Each of those dot products is just doing the row-column approach to matrix multiplication that you already know. This expression, for a general matrix, is rather tedious and tiresome, but most geometric transformations can be written more compactly.
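To see that the expression above really is ordinary matrix multiplication in disguise, here is a sketch that writes out the dot-product form against the standard basis (the entries $a$ through $i$ are arbitrary example values) and compares it to `T @ v`:

```python
import numpy as np

# Arbitrary example entries for the matrix T from the text.
a, b, c, d, e, f, g, h, i = range(1, 10)
T = np.array([[a, b, c], [d, e, f], [g, h, i]], dtype=float)

# Standard basis u1, u2, u3 and an example vector v.
u1, u2, u3 = np.eye(3)
v = np.array([1.0, -2.0, 0.5])

# The "row dotted with v, times basis vector" form of T(v):
Tv = ((a*u1 + b*u2 + c*u3) @ v) * u1 \
   + ((d*u1 + e*u2 + f*u3) @ v) * u2 \
   + ((g*u1 + h*u2 + i*u3) @ v) * u3

# Same result as ordinary matrix-vector multiplication.
assert np.allclose(Tv, T @ v)
```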
So, a matrix isn't just an array of numbers with some arcane multiplication rule attached. It corresponds to a linear function--a linear map, as mathematicians would say. You can see in the above example that the components of the matrix correspond with the basis we used to write out the function $T$. If you change basis, you change components. That much becomes obvious when written this way.
General tensors correspond to maps just as matrices do. Here, we showed a matrix can correspond to a map from a vector to a vector. A tensor could map a vector to another vector, or a vector to a covector, or several vectors to a scalar, for instance.
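As a concrete example of the "several vectors to a scalar" case, a rank-(0,2) tensor is a bilinear map. The sketch below (with a hypothetical component array `G`, e.g. a metric in some basis) checks linearity in each argument separately:

```python
import numpy as np

# Hypothetical components G_ij of a (0,2)-tensor in some basis.
rng = np.random.default_rng(1)
G = rng.standard_normal((3, 3))

def g(u, v):
    """Bilinear map g(u, v) = G_ij u^i v^j."""
    return u @ G @ v

u, v, w = rng.standard_normal((3, 3))
a, b = 1.5, -0.7

# Multilinearity: linear in each slot, holding the other fixed.
assert np.isclose(g(a*u + b*w, v), a*g(u, v) + b*g(w, v))
assert np.isclose(g(u, a*v + b*w), a*g(u, v) + b*g(u, w))
```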
On component transformation laws: physicists usually take the point of view that a change of basis doesn't change the underlying vector being described; it merely changes the basis used to describe that vector. The change of basis gives you different vector components, but the vector itself hasn't changed. When you think of a tensor as a map--as some linear function--you ought to be able to describe its arguments in any basis you like. This changes the components of the tensor as described in that basis, but not the tensor itself.
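This basis-independence can be checked directly. In the sketch below (all matrices are assumed, arbitrary examples), `P`'s columns give the new basis vectors in terms of the old; the (0,2)-tensor components transform covariantly while the vector components transform contravariantly, and the scalar value of the map comes out the same either way:

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.standard_normal((3, 3))     # basis-change matrix (invertible example)
G = rng.standard_normal((3, 3))     # (0,2)-tensor components in the old basis
u, v = rng.standard_normal((2, 3))  # vector components in the old basis

# Covariant components transform with P; contravariant with P^{-1}.
G_new = P.T @ G @ P
u_new = np.linalg.solve(P, u)
v_new = np.linalg.solve(P, v)

# The scalar g(u, v) does not depend on the basis used to compute it.
assert np.isclose(v @ G @ u, v_new @ G_new @ u_new)
```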
Now, even this answer is only just the tip of the iceberg. I would definitely criticize physicists for not presenting tensors as linear functions; if they had put more emphasis on this, the transformation laws would be obvious from the chain rule and hardly need comment.
However, I think a physicist should not be so eager to treat geometric objects (like vectors and such) as general tensors. You can do this, but doing so deprives you of the geometric intuition you have probably built up. Instead, geometric objects like tangent directions to curves, tangent planes to surfaces, and the like should be thought of as elements of an exterior (or Clifford) algebra. These formalisms let you ignore the "map" definition of vectors and such, so you can focus on building planes, volumes, and the like.
For calculus at this level, the mathematician's tool of choice seems to be differential forms. A physicist might find forms inelegantly integrated into Carroll's text alongside the vanilla, index-manipulation sludge of plain old tensor calculus. Do yourself a favor: at the least, learn forms. They make all the calculus here as easy as electromagnetism's vector calculus was. I do have issues with some of the conventions forms people tend to use: for reasons totally irrelevant to general relativity, they prefer to do everything in terms of forms rather than actual $k$-vector fields. That is an arbitrary choice, and it leads to circuitous garbage like defining inner products in terms of the Hodge star, which is backwards as sin--but it's still a big improvement over index manipulation.