I was working out the details of the following problem as I was preparing for a qualifying exam:
Problem:
Let $R$ be a unital ring (not necessarily commutative). Prove that if the free left $R$-modules $R^n$ and $R^m$ are isomorphic for some positive integers $n$ and $m$, then $R^n$ and $R^m$ are isomorphic as right $R$-modules.
This question has been asked before, but the answer is very short and doesn't work out the details; in working them out myself, I've run into some confusion.
Since the answer given by Lord Shark the Unknown is short, I'll reproduce it here before asking about the pieces I've found myself confused about.
Lord Shark the Unknown's answer:
If $\phi:R^m\to R^n$ is a left $R$-module isomorphism, and $\psi:R^n\to R^m$ is its inverse, then they correspond to matrices $A$ and $B$ over $R$ with $AB=I_m$ and $BA=I_n$. But then $A$ and $B$ correspond to right $R$-module maps $R^n\to R^m$ and $R^m\to R^n$ which are inverse to each other.
My work:
Minor comment: it appears that $\phi$ is intended to correspond to $A$ and $\psi$ to $B$, so $AB$ should correspond to $\phi \circ \psi$; with the stated domains that composite is $1_{R^n}$, which would give $AB=I_n$ rather than $AB=I_m$. Thus I'll assume that $\phi$ should be $\phi:R^n\to R^m$ and $\psi:R^m\to R^n$, so that $\phi\circ\psi = 1_{R^m}$ matches $AB=I_m$. It's quite possible that something weird happens with noncommutative rings, that the answer was correct as written, and that I'm missing something. (Later comment: it's also possible Lord Shark the Unknown was working with the transposes of the matrices I'm thinking of, in which case the stated dimensions make sense.)
Then let $e_1,\ldots,e_n$ be the standard basis for $R^n$, $f_1,\ldots,f_m$ the standard basis for $R^m$. Let $A=[\phi]$ be defined by $$\phi(e_j)=\sum_i A_{ij}f_i,$$ and $B=[\psi]$ be defined by $$\psi(f_i)=\sum_j B_{ji}e_j.$$
Setting aside for the moment that $\phi\circ \psi = 1_{R^m}$, the matrix $C:=[\phi\circ \psi]$ should be defined by $$\phi(\psi(f_i))=\sum_k C_{ki}f_k,\newcommand\of[1]{\left({#1}\right)}$$ but $$\phi(\psi(f_i)) = \phi\of{\sum_j B_{ji}e_j} = \sum_j B_{ji}\phi(e_j) =\sum_j B_{ji} \sum_k A_{kj}f_k =\sum_k \of{\sum_j B_{ji}A_{kj}}f_k.$$ Thus $C_{ki} =\sum_j B_{ji}A_{kj}$. Hence $B^TA^T = C^T$. Alternatively, if we regard $A$ and $B$ as matrices over $R^{\text{op}}$, we get $AB=C$, as claimed.
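As a sanity check on this index computation, here is a small numerical sketch (my own illustration, not from the original answer): I take $R=M_2(\mathbb{Z})$, a noncommutative ring whose elements are $2\times 2$ integer matrices, store matrices over $R$ as arrays of $2\times 2$ blocks, and verify that the matrix of $\phi\circ\psi$ is the product computed in $R^{\text{op}}$, while the naive product over $R$ differs. The helper names are ad hoc.

```python
import numpy as np

# R = M_2(Z): elements are 2x2 integer matrices.  An m-by-n matrix over R
# is stored as a numpy array of shape (m, n, 2, 2).
a = np.array([[0, 1], [0, 0]])   # a and b do not commute: ab != ba
b = np.array([[0, 0], [1, 0]])
z = np.zeros((2, 2), dtype=int)

A = np.array([[a, z], [z, b]])   # A = [phi]: phi(e_j) = sum_i A[i,j] f_i
B = np.array([[b, z], [z, a]])   # B = [psi]: psi(f_i) = sum_j B[j,i] e_j

def apply_left(M, v):
    """Coordinates of phi(v) for the LEFT-linear map phi with matrix M:
    phi(sum_j v_j e_j) = sum_j v_j phi(e_j), so coordinate i is
    sum_j v[j] @ M[i, j] -- the scalar v_j acts on the left."""
    m, n = M.shape[:2]
    out = np.zeros((m, 2, 2), dtype=int)
    for i in range(m):
        for j in range(n):
            out[i] += v[j] @ M[i, j]
    return out

def opp_product(B, A):
    """C[k, i] = sum_j B[j, i] @ A[k, j]: the product AB taken in R^op."""
    n, m = B.shape[:2]
    C = np.zeros((m, m, 2, 2), dtype=int)
    for k in range(m):
        for i in range(m):
            for j in range(n):
                C[k, i] += B[j, i] @ A[k, j]
    return C

def naive_product(A, B):
    """(AB)[k, i] = sum_j A[k, j] @ B[j, i]: the usual product over R."""
    m, n = A.shape[:2]
    C = np.zeros((m, m, 2, 2), dtype=int)
    for k in range(m):
        for i in range(m):
            for j in range(n):
                C[k, i] += A[k, j] @ B[j, i]
    return C

# Matrix of phi∘psi: column i holds the coordinates of phi(psi(f_i)),
# and psi(f_i) has coordinates B[:, i].
C = np.stack([apply_left(A, B[:, i]) for i in range(2)], axis=1)

print(np.array_equal(C, opp_product(B, A)))    # True: composite = R^op product
print(np.array_equal(C, naive_product(A, B)))  # False: naive product differs
```

The two prints make the point concrete: the matrix of the composite agrees entrywise with $AB$ computed in $R^{\text{op}}$, and genuinely disagrees with $AB$ computed in $R$ once the entries stop commuting.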
Now over $R^{\text{op}}$ we get $AB=I_m$, $BA=I_n$, or over $R$, we get $B^TA^T=I_m$, and $A^TB^T=I_n$. This suggests that we should use the transposes to define the maps for the right modules, since right linear maps won't reverse the order of multiplication. (If $\phi(v)=ws$, $\psi(w)=ur$, then $\psi(\phi(v))=\psi(ws)=\psi(w)s=urs$).
Then if we define $$\tilde{\phi}(e_j) =\sum_i f_i B_{ji}\text{, and } \tilde{\psi}(f_i) =\sum_j e_j A_{ij},$$ we can check that $$\tilde{\phi}(\tilde{\psi}(f_i)) = \tilde{\phi}\of{\sum_j e_j A_{ij} } = \sum_j \tilde{\phi}(e_j) A_{ij} = \sum_j \sum_k f_kB_{jk}A_{ij} = \sum_k f_k \delta_{ik} = f_i, $$ and similarly, we get $\tilde{\psi}(\tilde{\phi}(e_j))=e_j$, so $\tilde{\phi}$ and $\tilde{\psi}$ are inverse isomorphisms.
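This last computation can also be checked numerically (again my own sketch over $R=M_2(\mathbb{Z})$, with hypothetical helper names): I take $A$ and $B$ to be elementary matrices over $R$, which are inverse to each other in $R$ and in $R^{\text{op}}$ alike, and verify that the right-linear maps defined via the transposes are mutually inverse on the standard basis.

```python
import numpy as np

a = np.array([[0, 1], [0, 0]])
z = np.zeros((2, 2), dtype=int)
one = np.eye(2, dtype=int)

# Elementary matrices over R, inverse to each other in either order,
# playing the roles of A = [phi] and B = [psi] (here m = n = 2).
A = np.array([[one, a], [z, one]])
B = np.array([[one, -a], [z, one]])

def apply_right(T, v):
    """Coordinates of the RIGHT-linear map sending basis vector j to
    sum_i (basis vector i) * T[i, j]; for w = sum_j e_j w_j this gives
    coordinate i = sum_j T[i, j] @ w_j -- the scalar w_j acts on the right."""
    m, n = T.shape[:2]
    out = np.zeros((m, 2, 2), dtype=int)
    for i in range(m):
        for j in range(n):
            out[i] += T[i, j] @ v[j]
    return out

def tr(M):
    """Transpose over R: swap the two block indices only."""
    return M.transpose(1, 0, 2, 3)

# tilde_phi is the right-linear map with coordinate array B^T, and
# tilde_psi the one with A^T.  Check both composites on the standard basis:
basis = [np.stack([one if j == k else z for j in range(2)]) for k in range(2)]
phi_then_psi = all(
    np.array_equal(apply_right(tr(A), apply_right(tr(B), e)), e) for e in basis
)
psi_then_phi = all(
    np.array_equal(apply_right(tr(B), apply_right(tr(A), f)), f) for f in basis
)
print(phi_then_psi and psi_then_phi)   # True
```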
Questions:
- Is this the standard way to handle matrices over noncommutative rings? I.e., for left modules do we usually take the entries to lie in $R^{\text{op}}$? For right modules it appears that the entries lie in $R$. Then taking transposes gives an isomorphism between $\newcommand\op{\text{op}}\newcommand\Mat{\mathrm{Mat}}\Mat_{n\times m}(R^{\text{op}})$ and $\Mat_{m\times n}(R)$? Is this correct, and is it the standard way to think about these things?
- If anyone could let me know if I've understood the intent of Lord Shark the Unknown's answer, or if I'm misunderstanding, that would be very helpful.
- It feels like there should be a more conceptual way of thinking about what's going on here, by translating the matrix argument into an argument about $\operatorname{Hom}$ functors/dualization. Something like the following:
Let $\phi: R^n\to R^m$ and $\psi: R^m \to R^n$ be inverse isomorphisms. Let $*$ denote the functor $\newcommand\Hom{\operatorname{Hom}}\Hom(-,R)$. Then $\psi^*:R^{n*}\to R^{m*}$ and $\phi^*:R^{m*}\to R^{n*}$ are inverse isomorphisms. $R^{n*}$ has a natural right $R$-module structure so that $R^{n*}\simeq R^n$ as right $R$-modules.
The natural right $R$-module structure should be simply right multiplication by elements of $R$. I.e., if $\alpha \in \Hom(R^n,R)$, and $s\in R$, then define $(\alpha s)(x) = \alpha(x)s$. As for the natural isomorphism with $R^n$, it should be given by $\alpha \mapsto (\alpha(e_i))_i$. Right linearity follows from the definition of the right action of $R$ on $\Hom(R^n,R)$, injectivity follows from the fact that the $e_i$ generate $R^n$, and surjectivity follows from the existence of $f_j$ such that $f_j(e_i)=\delta_{ij}$, since $R^n$ is free.
Is this idea correct?
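To convince myself that the right action and the coordinate map behave as claimed, here is one more quick sketch (mine, with $R=M_2(\mathbb{Z})$ and ad hoc names): a functional $\alpha\in\Hom(R^2,R)$ is stored as the tuple $(\alpha(e_i))_i$, and the check below confirms that $\alpha\mapsto(\alpha(e_i))_i$ intertwines the right actions.

```python
import numpy as np

a = np.array([[0, 1], [0, 0]])
b = np.array([[0, 0], [1, 0]])
s = np.array([[1, 2], [3, 4]])

def evaluate(c, v):
    """alpha(v) for the left-linear functional alpha with alpha(e_i) = c[i]:
    alpha(sum_i v_i e_i) = sum_i v_i alpha(e_i), so v_i acts on the left."""
    return sum(v[i] @ c[i] for i in range(len(c)))

c = [a, b]        # alpha, stored as the tuple (alpha(e_1), alpha(e_2))
v = [s, a + b]    # an arbitrary vector in R^2

# (alpha s)(x) = alpha(x) s, and the tuple of alpha s is (c_1 s, c_2 s),
# so the coordinate map alpha -> (alpha(e_i))_i is right R-linear:
lhs = evaluate([ci @ s for ci in c], v)   # (alpha s)(v), via the tuple of alpha s
rhs = evaluate(c, v) @ s                  # alpha(v) s
print(np.array_equal(lhs, rhs))           # True, by associativity in R
```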
Answer:
Using $$ \operatorname{Hom}({}_RR^m,{}_RR_R)\cong R_R^m $$ is much simpler.
Since $\operatorname{Hom}({-},{}_RR_R)\colon R\operatorname{\!-Mod}\to\operatorname{Mod-\!}R$ is a (contravariant) functor, it sends isomorphisms to isomorphisms.
The converse follows from symmetry.
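Spelling this out (my own summary of the chain, using the identification above): any left $R$-module isomorphism $\phi:R^m\to R^n$ yields
$$ R_R^n \;\cong\; \operatorname{Hom}({}_RR^n,{}_RR_R) \;\xrightarrow{\ \operatorname{Hom}(\phi,\,R)\ }\; \operatorname{Hom}({}_RR^m,{}_RR_R) \;\cong\; R_R^m, $$
where the middle map is an isomorphism of right $R$-modules because functors send isomorphisms to isomorphisms.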