Can vector space dualization be made into a covariant functor?


The contravariant endofunctor $$\mathbf{Set}^{op} \rightarrow \mathbf{Set}$$ $$X \mapsto [X,2]$$

can be made into a covariant endofunctor $$\mathbf{Set} \rightarrow \mathbf{Set}$$ $$X \mapsto \mathcal{P}(X)$$

in such a way that:

  • these functors do the same thing to objects
  • they do the same thing to isomorphisms, except that $X \mapsto [X,2]$ flips the direction.

This isn't too surprising, since $X \mapsto [X,2]$ can be seen as "completing" a set to a suplattice and then forgetting the suplattice structure. It's probably fair to say that the covariant structure on this functor essentially comes from this fact (though perhaps there is a better way of looking at it).
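In concrete terms, the two power-set functors act on morphisms by direct image and preimage, and on a bijection the contravariant action is just the covariant action of the inverse. A small Python sketch (the names `direct_image` and `preimage` are illustrative, not from any library):

```python
# The two power-set functors on a morphism f : X -> Y:
#   covariant:     P(f)  : P(X) -> P(Y),  S |-> f(S)      (direct image)
#   contravariant: [f,2] : P(Y) -> P(X),  T |-> f^{-1}(T) (preimage)
def direct_image(f):
    return lambda S: frozenset(f(x) for x in S)

def preimage(f, domain):
    return lambda T: frozenset(x for x in domain if f(x) in T)

# For an isomorphism f, the contravariant action equals the covariant
# action of f^{-1}: the two functors agree up to flipping the direction.
f = {0: 'a', 1: 'b'}.get        # a bijection {0,1} -> {'a','b'}
f_inv = {'a': 0, 'b': 1}.get
for T in [frozenset(), frozenset({'a'}), frozenset({'a', 'b'})]:
    assert preimage(f, {0, 1})(T) == direct_image(f_inv)(T)
```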

Anyway, I'm wondering if we can do something similar to the dual-vector-space functor

$$\mathbb{R}\mathbf{Mod}^{op} \rightarrow \mathbb{R}\mathbf{Mod}$$

$$X \mapsto \mathbb{R}\mathbf{Mod}(X,\mathbb{R}).$$

It seems likely that we can, since I tend to think of dual vector spaces as "completing" a vector space to a gadget in which certain infinite sums exist, and then forgetting this extra structure.

There are 2 answers below.

Best answer:

No, this is not possible. Let's restrict our attention to just finite-dimensional vector spaces, and the skeleton of this category whose objects are $\mathbb{R}^n$ and whose morphisms are matrices. Your question then is: is there an operation $F$ which sends $m\times n$ matrices to $m\times n$ matrices for all $m$ and $n$, preserves multiplication of matrices, and satisfies $F(A)=(A^T)^{-1}$ when $A$ is a square invertible matrix?

Let's suppose $F$ is such an operation and do some computations with it. Let $v=\begin{pmatrix} 1 \\ 0\end{pmatrix}$, $A=\begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix}$, $B=\begin{pmatrix} 1 & 0 \\ 0 & 2\end{pmatrix}$, and $w=\begin{pmatrix} 1 & 0\end{pmatrix}$.

We have $F(A)=(A^T)^{-1}=\begin{pmatrix} 1 & 0 \\ -1 & 1\end{pmatrix}$. We also have $Av=v$, and hence $F(A)F(v)=F(Av)=F(v)$. This means that $F(v)$ must have the form $\begin{pmatrix} 0 \\ x\end{pmatrix}$ for some $x$. Similarly, we have $F(B)=(B^T)^{-1}=\begin{pmatrix} 1 & 0 \\ 0 & 1/2\end{pmatrix}$ and $Bv=v$, so $F(B)F(v)=F(v)$ which means $x=0$. Thus $F(v)=0$. But $wv$ is the $1\times 1$ matrix $1$, so $F(wv)=1$ as well. This is a contradiction since $F(wv)=F(w)F(v)=0$.
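The matrix identities this argument relies on are easy to check numerically. A NumPy sketch (the operation $F$ itself is hypothetical, so only its forced values on invertible matrices appear):

```python
import numpy as np

# The matrices from the argument above.
v = np.array([[1.0], [0.0]])            # 2x1, a morphism R^1 -> R^2
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
w = np.array([[1.0, 0.0]])              # 1x2, a morphism R^2 -> R^1

# On invertible square matrices, F is forced to be the inverse transpose.
FA = np.linalg.inv(A.T)
FB = np.linalg.inv(B.T)

assert np.allclose(FA, [[1.0, 0.0], [-1.0, 1.0]])
assert np.allclose(FB, [[1.0, 0.0], [0.0, 0.5]])
assert np.allclose(A @ v, v)        # Av = v, hence F(A)F(v) = F(v)
assert np.allclose(B @ v, v)        # Bv = v, hence F(B)F(v) = F(v)
assert np.allclose(w @ v, [[1.0]])  # wv is the 1x1 identity matrix

# Fixed vectors of FA have the form (0, x); among those, only x = 0 is
# also fixed by FB.  So F(v) = 0, yet F(w)F(v) = F(wv) = 1: contradiction.
```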

Second answer:

IMO, the key feature of the power set example is that $\mathcal{P}$ is secretly a functor from sets to complete lattices, and the category of complete lattices is actually a $2$-category and morphisms have adjoints. The covariant and contravariant power set functors are just applying the forgetful functor to different halves of an adjunction.

I doubt you can set up anything analogous for vector spaces. However, I believe you can do so for some categories of inner product spaces: for example, any morphism of Hilbert spaces has an adjoint (in the linear-algebra sense), which lets you carry out the same setup.

(disclaimer: my functional analysis is rusty so the above may not actually be a fact, and if it is a fact it may be under unnecessarily restrictive conditions)

I suspect this restriction is more closely related to your intuition than your actual question — for example, it doesn't even make sense to talk about $V^*$ as a completion of $V$ unless you actually have a morphism $V \to V^*$, and such a thing usually comes in the form of a transpose with respect to an inner product: i.e. the map $v \mapsto \langle -, v \rangle$.
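Concretely, in the finite-dimensional real case the adjoint with respect to the standard inner product is just the transpose, and the map $v \mapsto \langle -, v \rangle$ is "dot with $v$". A small NumPy check (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # a morphism R^2 -> R^3
x = rng.standard_normal(2)
y = rng.standard_normal(3)

# The adjoint of A w.r.t. the standard inner product is A^T:
# <Ax, y> = <x, A^T y>, so every morphism has an adjoint going the other way.
assert np.isclose(np.dot(A @ x, y), np.dot(x, A.T @ y))

# The canonical morphism V -> V* induced by the inner product,
# v |-> <-, v>, sends v to the functional u |-> <u, v>.
pairing = lambda v: (lambda u: np.dot(u, v))
assert np.isclose(pairing(x)(x), np.dot(x, x))
```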

Aside: IIRC there are indeed a number of analogies to be made between $\hom(-, -)$ for categories and $\langle -, - \rangle$ for inner product spaces.