The contravariant endofunctor $$\mathbf{Set}^{op} \rightarrow \mathbf{Set}$$ $$X \mapsto [X,2]$$
can be made into a covariant endofunctor $$\mathbf{Set} \rightarrow \mathbf{Set}$$ $$X \mapsto \mathcal{P}(X)$$
in such a way that:
- these functors do the same thing to objects
- they do the same thing to isomorphisms, except that $X \mapsto [X,2]$ flips the direction.
This isn't too surprising, since $X \mapsto [X,2]$ can be seen as "completing" a set to a suplattice and then forgetting the suplattice structure. It's probably fair to say that the covariant structure on this functor comes essentially from this fact (though perhaps there is a better way of looking at it).
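To make the claim about isomorphisms concrete (this check is mine, not part of the original question; `direct_image` and `preimage` are hypothetical helper names): for a bijection $f$, the covariant powerset action $\mathcal{P}(f)$ (direct image) agrees with the contravariant action $[-,2]$ (preimage, via characteristic functions) applied to $f^{-1}$.

```python
# Sketch: for a bijection f, direct image along f (the covariant powerset
# functor) equals preimage along f's inverse (the contravariant [-,2] functor
# applied to f^{-1}).
from itertools import combinations

def subsets(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def direct_image(f, S):
    """P(f): the covariant action on a subset S of f's domain."""
    return frozenset(f[x] for x in S)

def preimage(f, S):
    """[f,2]: the contravariant action on a subset S of f's codomain."""
    return frozenset(x for x in f if f[x] in S)

f = {0: 'b', 1: 'c', 2: 'a'}            # a bijection {0,1,2} -> {a,b,c}
f_inv = {v: k for k, v in f.items()}    # its inverse

# The two functors agree on this isomorphism, up to flipping the direction:
assert all(direct_image(f, S) == preimage(f_inv, S) for S in subsets(f))
```

Of course this only checks bijections; the question is precisely whether the agreement can be extended functorially to all maps.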
Anyway, I'm wondering if we can do something similar to the dual-vectorspace functor
$$\mathbb{R}\mathbf{Mod}^{op} \rightarrow \mathbb{R}\mathbf{Mod}$$
$$X \mapsto \mathbb{R}\mathbf{Mod}(X,\mathbb{R}).$$
It seems likely that we can, since I tend to think of dual vector spaces as "completing" a vector space to a gadget in which certain infinite sums exist, and then forgetting this extra structure.
No, this is not possible. Let's restrict our attention to finite-dimensional vector spaces, and further to the skeleton of this category whose objects are the $\mathbb{R}^n$ and whose morphisms are matrices. Your question then becomes: is there an operation $F$ which sends $m\times n$ matrices to $m\times n$ matrices for all $m$ and $n$, preserves multiplication of matrices, and satisfies $F(A)=(A^T)^{-1}$ when $A$ is a square invertible matrix? (The condition $F(A)=(A^T)^{-1}$ is what "agreeing with the dual on isomorphisms, with the direction flipped" amounts to: the dual of $A^{-1}$ is $(A^{-1})^T=(A^T)^{-1}$.)
Let's suppose $F$ is such an operation and do some computations with it. Let $v=\begin{pmatrix} 1 \\ 0\end{pmatrix}$, $A=\begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix}$, and $B=\begin{pmatrix} 1 & 0 \\ 0 & 2\end{pmatrix}$, and $w=\begin{pmatrix} 1 & 0\end{pmatrix}$.
We have $F(A)=(A^T)^{-1}=\begin{pmatrix} 1 & 0 \\ -1 & 1\end{pmatrix}$. We also have $Av=v$, and hence $F(A)F(v)=F(Av)=F(v)$. Since $F(A)\begin{pmatrix} a \\ b\end{pmatrix}=\begin{pmatrix} a \\ b-a\end{pmatrix}$, the fixed vectors of $F(A)$ are those with first coordinate $0$, so $F(v)$ must have the form $\begin{pmatrix} 0 \\ x\end{pmatrix}$ for some $x$. Similarly, we have $F(B)=(B^T)^{-1}=\begin{pmatrix} 1 & 0 \\ 0 & 1/2\end{pmatrix}$ and $Bv=v$, so $F(B)F(v)=F(v)$, which forces $x/2=x$, i.e. $x=0$. Thus $F(v)=0$. But $wv$ is the $1\times 1$ identity matrix, and $F$ must preserve identities, so $F(wv)=1$. This is a contradiction, since $F(wv)=F(w)F(v)=0$.
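As a sanity check on the arithmetic above (my addition, not part of the argument; `matmul` and `inv_transpose_2x2` are hypothetical helper names), the matrix identities used in the proof can be verified directly:

```python
# Verify the matrix computations in the contradiction argument.
# Matrices are nested lists, covering only the small shapes used here.

def matmul(A, B):
    """Ordinary matrix product of compatibly-sized nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv_transpose_2x2(A):
    """(A^T)^{-1} for an invertible 2x2 matrix A."""
    (a, b), (c, d) = A
    det = a * d - b * c
    # Transpose, then invert: (A^T)^{-1} = (1/det) * [[d, -c], [-b, a]].
    return [[d / det, -c / det], [-b / det, a / det]]

v = [[1], [0]]
A = [[1, 1], [0, 1]]
B = [[1, 0], [0, 2]]
w = [[1, 0]]

assert inv_transpose_2x2(A) == [[1, 0], [-1, 1]]    # F(A) as claimed
assert inv_transpose_2x2(B) == [[1, 0], [0, 0.5]]   # F(B) as claimed
assert matmul(A, v) == v and matmul(B, v) == v      # Av = v and Bv = v
assert matmul(w, v) == [[1]]                        # wv is the 1x1 identity
```

The contradiction itself, of course, is not something to compute: it shows no such $F$ can exist at all.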