Convenience of using universal property of tensor product


The Wikipedia page on universal properties gives, as one of the motivations for universal properties:

The concrete details of a given construction may be messy, but if the construction satisfies a universal property, one can forget all those details: all there is to know about the construction is already contained in the universal property. Proofs often become short and elegant if the universal property is used rather than the concrete details. For example, the tensor algebra of a vector space is slightly painful to actually construct, but using its universal property makes it much easier to deal with.

Is there a simple, concrete example that demonstrates the convenience of using the universal property rather than the explicit construction of the tensor product space, for example to prove some property of the tensor algebra? Coming from a physics background, I'm very used to doing calculations in a particular basis, and it never really struck me as inconvenient; the notion of a universal property seems rather abstract.


1 Answer


First, I'd like to clarify a bit what I think the Wikipedia page is saying. The quoted passage does not (and should not) say that one can do away with explicit constructions. To me, it suggests that once a construction is given and verified to satisfy the universal property, the details of that construction can typically be left behind. Typically is a crucial word here. As a researcher in algebra and number theory, I frequently use both universal properties and explicit constructions; in practice, there is no judgement against either.

Second, an answer for you. I like the example of Qiaochu Yuan in the comments. I'll choose a different one. To clarify, take the universal property of a tensor algebra to be as follows. Fix $k$ to be a field and $V$ a vector space over $k$. Then, the tensor algebra $T(V)$ is a $k$-algebra together with a $k$-linear map $i : V \rightarrow T(V)$ such that if $A$ is any $k$-algebra and $\varphi : V \rightarrow A$ is a $k$-linear map, then there is a unique $k$-algebra map $\psi:T(V) \rightarrow A$ such that $\psi \circ i = \varphi$, i.e. $T(V)$ comes with a map $i$ and makes diagrams like $$\require{AMScd} \begin{CD} V @>{i}>> T(V)\\ @| @V{\psi}VV\\ V @>{\varphi}>> A \end{CD}$$ commute. (Sorry for the diagram being square with =. I don't know how to make a triangle diagram here.)
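To make the property tangible, here is a minimal computational sketch (my own illustration, not standard library code; the names `i`, `phi`, `psi`, `phi_on_basis` are just the maps above): model $T(V)$ for $V = k^2$ as formal linear combinations of words in the basis symbols, take $A$ to be the algebra of $2\times 2$ real matrices, and check that the extension $\psi$ of a linear map $\varphi$ satisfies $\psi \circ i = \varphi$.

```python
# A minimal sketch (assumed/illustrative, not a full implementation): model
# T(V) for V = k^n as formal k-linear combinations of words in the basis
# symbols e_1, ..., e_n, and check psi . i = phi with A = 2x2 real matrices.
import numpy as np

n = 2  # dim V

def i(v):
    """The canonical k-linear map V -> T(V): v = sum_j v_j e_j goes to the
    degree-1 element whose word (j,) has coefficient v_j."""
    return {(j,): v[j] for j in range(n) if v[j] != 0}

# phi : V -> A, a k-linear map into 2x2 matrices, determined by its values
# on the basis (an arbitrary illustrative choice).
phi_on_basis = [np.array([[0., 1.], [0., 0.]]),
                np.array([[0., 0.], [1., 0.]])]

def phi(v):
    return sum(v[j] * phi_on_basis[j] for j in range(n))

def psi(t):
    """The unique algebra map T(V) -> A extending phi: a word
    (j_1, ..., j_r) is forced to go to phi(e_{j_1}) ... phi(e_{j_r})."""
    total = np.zeros((2, 2))
    for word, c in t.items():
        prod = np.eye(2)
        for j in word:
            prod = prod @ phi_on_basis[j]
        total += c * prod
    return total

v = np.array([2., 3.])
assert np.allclose(psi(i(v)), phi(v))  # the triangle commutes: psi . i = phi
```

The key point the code reflects is uniqueness: once $\psi$ must be an algebra map agreeing with $\varphi$ on degree-1 elements, its value on every word is forced to be the corresponding product of matrices.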

Now, based just on this, we have the following lemma:

Lemma: If $V$ has dimension $n$ over $k$, then $T(V)$ is isomorphic to the (edit: non-commutative) polynomial $k$-algebra in $n$ variables.

In fact, denote by $R = k[x_1,\dotsc,x_n]$ the (edit: non-commutative) polynomial algebra in $n$ indeterminates $x_1,\dotsc,x_n$. Let $e_1,\dotsc,e_n$ be a basis of $V$ over $k$. Define a $k$-linear map $i: V \rightarrow R$ by $i(e_j) = x_j$. Given any linear map $\varphi:V \rightarrow A$, we define $\psi: R \rightarrow A$ to be the unique (!) $k$-algebra map given by $\psi(x_i) = \varphi(e_i)$. This construction satisfies the universal property required of $T(V)$, and therefore $T(V) \cong R$.
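Since the lemma hinges on $R$ being non-commutative, it may help to see the free algebra in miniature (a sketch of my own; the names `mul`, `x1`, `x2` are illustrative): monomials are words in the generators and multiplication is concatenation, so $x_1 x_2 \neq x_2 x_1$, exactly as $e_1 \otimes e_2 \neq e_2 \otimes e_1$ in $T(V)$.

```python
# Minimal model of the free (non-commutative) polynomial algebra:
# elements are dicts {word: coefficient}, where a word is a tuple of
# generator indices, and multiplication concatenates words.
def mul(p, q):
    """Multiply two elements given as dicts {word: coefficient}."""
    out = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            w = w1 + w2  # concatenation: order matters, so x1*x2 != x2*x1
            out[w] = out.get(w, 0) + c1 * c2
    return out

x1 = {(1,): 1}  # the generator x_1
x2 = {(2,): 1}  # the generator x_2

assert mul(x1, x2) == {(1, 2): 1}
assert mul(x2, x1) == {(2, 1): 1}
assert mul(x1, x2) != mul(x2, x1)  # non-commutative, unlike ordinary k[x1, x2]
```

In the commutative polynomial ring one would sort each word before storing it; refusing to sort is precisely what makes the algebra free.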

I have a couple of remarks. The identification $T(V) \cong R$ is unique, but you have to remember that this means unique with respect to extending the natural $k$-linear maps $V \rightarrow T(V)$ and $V \rightarrow R$. In my notation, the isomorphism $T(V) \cong R$ depends exactly on the choice of basis $\{e_i\}$. Don't give up your dream of using a basis!
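To make the dependence concrete (my own illustrative example): take $n = 2$ and compare the bases $\{e_1, e_2\}$ and $\{f_1, f_2\}$ with $f_1 = e_1$, $f_2 = e_1 + e_2$. The first basis yields the isomorphism with $$e_1 \mapsto x_1, \qquad e_2 \mapsto x_2,$$ while the second sends $f_1 \mapsto x_1$, $f_2 \mapsto x_2$, i.e. $$e_1 \mapsto x_1, \qquad e_2 \mapsto x_2 - x_1.$$ Both are perfectly good isomorphisms $T(V) \cong R$; they simply extend different linear maps $V \rightarrow R$.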

The other note is that in the construction of the lemma, I implicitly used the universal property of $R = k[x_1,\dotsc,x_n]$, which says that to give a $k$-algebra map $R \rightarrow A$ is to specify the images of the $x_i$. This is the universal property of the "free algebra on $n$ generators". This example therefore illustrates a basic phenomenon: universal properties are particularly adept at being compared to one another. The theory is a language and, naturally, things make the most sense when we stick to one language at a time.
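Concretely, the comparison just described can be condensed (my own summary) into a chain of natural bijections: $$\mathrm{Hom}_{k\text{-alg}}(R, A) \;\cong\; A^n \;\cong\; \mathrm{Hom}_k(V, A) \;\cong\; \mathrm{Hom}_{k\text{-alg}}(T(V), A),$$ where the first bijection is the universal property of the free algebra ($\psi \mapsto (\psi(x_1),\dotsc,\psi(x_n))$), the middle one uses the chosen basis $\{e_i\}$, and the last is the universal property of $T(V)$. Since $R$ and $T(V)$ represent the same functor $A \mapsto \mathrm{Hom}_k(V, A)$, the usual uniqueness argument for universal objects (or the Yoneda lemma) gives $T(V) \cong R$ without ever multiplying two tensors by hand.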