Torsionfree modules over PIDs


We first recall that given any principal ideal domain $R$ and any finitely generated torsionfree $R$-module $M$, the module $M$ is free; there are quite a handful of proofs of this on the internet and in the relevant texts. However, suppose we are given a finite set of elements of $M$ that spans $M$ but need not be linearly independent. Is there an algorithm with which one may obtain a basis for $M$ from this set, thereby proving that $M$ is a free $R$-module? For instance, we might have two linearly independent elements $m_1$ and $m_2$ of $M$ and, without loss of generality, a relation $r_1 m_1 + r_2 m_2=r_3 m_3$, where we may also assume that the ideal generated by the coefficients is $R$ itself by the torsionfree assumption. I am not too sure if this is related to the Smith normal form, as I have not looked at that in detail yet. Thank you


There are 3 best solutions below


First of all, there are three questions I want to answer.

Question 1: if you have an $R$-module $V$ along with a spanning set, can you algorithmically use this to find a basis for $V$?

Answer 1: no, because if $V$ is not free, no basis exists.


Question 2: if you have a finite spanning set of $V$, can you somehow use it to determine whether $V$ is free?

Answer 2: yes in principle, since freeness is equivalent to torsionfreeness, but one must test more than the spanning elements themselves:

Theorem: suppose $V$ is a finitely generated $R$-module. Then $V$ is free if and only if $Rv\cong R$ for every nonzero $v\in V$.

Proof: if $V$ is free, then any nonzero element generates a free rank-$1$ submodule, and thus $Rv\cong R$. If $V$ is not free, then by the classification of finitely generated modules over a PID, we can write $V\cong F\oplus T$, where $F$ is a free module and $T$ is a nonzero torsion module. Any nonzero element $v$ of the torsion summand then satisfies $Rv\not\cong R$.

Note that checking only a spanning set is not enough: for $R=\mathbb{Z}$ and $V=\mathbb{Z}\oplus\mathbb{Z}/2$, the elements $(1,0)$ and $(1,1)$ span $V$ and each generates a submodule isomorphic to $\mathbb{Z}$, yet $V$ is not free.


Question 3: if you have a free $R$-module $F$ along with a spanning set, can you algorithmically use this to find a basis?

Answer 3: Yes! Details below.

There is an algorithmic process, and it does indeed have to do with Smith normal form. Unfortunately, it is considerably more work than the Gaussian elimination we have over fields, so it is not great to do by hand except in small cases.

Theorem: if $M$ is an $m\times n$ matrix over a PID $R$, then there exist an $m\times m$ matrix $A$ and an $n\times n$ matrix $B$, both of which are invertible, such that $AMB$ is in Smith normal form. That is, $AMB$ is a diagonal matrix with diagonal entries $\delta_1,\ldots, \delta_{\min(m,n)}$ satisfying $\delta_1\mid\delta_2\mid\cdots$, some of which may be $0$ (placed at the end). Further, we can explicitly find these $\delta_i$ for any such $M$ using an algorithm. (reference)
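For $R=\mathbb{Z}$ the algorithm is just repeated Euclidean division on rows and columns. Here is a minimal pure-Python sketch (the function name `smith_normal_form` is a choice made here, not a library routine, and the final pass that would enforce the divisibility chain $\delta_1\mid\delta_2\mid\cdots$ is omitted for brevity):

```python
def smith_normal_form(M):
    """Diagonalize an integer matrix M: return (A, D, B) with D = A*M*B
    diagonal and A, B invertible over Z.  Sketch for R = Z; the pass
    enforcing the divisibility chain d1 | d2 | ... is omitted."""
    m, n = len(M), len(M[0])
    D = [row[:] for row in M]
    A = [[int(i == j) for j in range(m)] for i in range(m)]
    B = [[int(i == j) for j in range(n)] for i in range(n)]

    def swap_rows(i, j):
        D[i], D[j] = D[j], D[i]
        A[i], A[j] = A[j], A[i]

    def swap_cols(i, j):
        for r in D: r[i], r[j] = r[j], r[i]
        for r in B: r[i], r[j] = r[j], r[i]

    def add_row(i, j, c):  # row i += c * row j (unimodular)
        D[i] = [a + c * b for a, b in zip(D[i], D[j])]
        A[i] = [a + c * b for a, b in zip(A[i], A[j])]

    def add_col(i, j, c):  # col i += c * col j (unimodular)
        for r in D: r[i] += c * r[j]
        for r in B: r[i] += c * r[j]

    for t in range(min(m, n)):
        # pick any nonzero pivot in the remaining block
        piv = next(((i, j) for i in range(t, m) for j in range(t, n) if D[i][j]), None)
        if piv is None:
            break
        swap_rows(t, piv[0]); swap_cols(t, piv[1])
        changed = True
        while changed:  # Euclidean steps shrink the pivot until it divides
            changed = False
            for i in range(t + 1, m):
                if D[i][t]:
                    add_row(i, t, -(D[i][t] // D[t][t]))
                    if D[i][t]:  # nonzero remainder becomes the new pivot
                        swap_rows(i, t)
                    changed = True
            for j in range(t + 1, n):
                if D[t][j]:
                    add_col(j, t, -(D[t][j] // D[t][t]))
                    if D[t][j]:
                        swap_cols(j, t)
                    changed = True
    # normalize signs on the diagonal
    for i in range(min(m, n)):
        if D[i][i] < 0:
            D[i] = [-x for x in D[i]]
            A[i] = [-x for x in A[i]]
    return A, D, B
```

Each elementary operation is invertible over $\mathbb{Z}$, so $A$ and $B$ stay unimodular throughout.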

Now suppose that, for some PID $R$, you have a free module $F$ of rank $n$. Since $F\cong R^n$, we will just assume that $F=R^n$. Suppose you also have a spanning set $\{v_1,\ldots, v_m\}\subset F$. Since the $R$-span of this set is $F=R^n$, and $R$ has invariant basis number, we know that $m\ge n$.

Consider the $n\times m$ matrix $M$ whose columns are given by the $v_i$. We can consider $M$ as a map $R^m\to F$. By our theorem, there exist invertible $n\times n$ and $m\times m$ matrices $A$ and $B$ such that $AMB$ is in Smith normal form. Note that $AMB$ is a map $R^m\to R^n$; since $B$ is invertible, the column space of $AMB$ equals that of $AM$, which is $A$ applied to the column space of $M$, namely all of $R^n$. Further, $AMB$ is diagonal, so it must be of the form

$$\begin{pmatrix} \delta_1 & 0 & \cdots & 0 & 0 & \cdots & 0\\ 0 & \delta_2 & \cdots & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots\\ 0 & 0 & \cdots & \delta_n & 0 & \cdots & 0\\ \end{pmatrix}$$

Call the columns of this matrix $w_i$. Since the columns of $AMB$ span $R^n$, each $\delta_i$ with $i\le n$ must be a unit, so $w_1,\ldots, w_n$ are linearly independent, span $R^n$, and are thus a basis of $R^n$. Now multiplication by $A^{-1}$ takes this basis to a basis of $R^n=F$.
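Over $R=\mathbb{Z}$, the whole extraction can in fact be done with column operations alone: unimodular column operations do not change the $\mathbb{Z}$-span of the columns, so reducing the matrix of spanning vectors to column echelon form leaves a basis. A sketch under that assumption (the name `basis_from_spanning` and the pivoting strategy are choices made here):

```python
def basis_from_spanning(vectors):
    """Given integer vectors spanning Z^n, return n vectors forming a basis.
    Sketch for R = Z: unimodular column operations preserve the span."""
    n = len(vectors[0])
    cols = [list(v) for v in vectors]  # one list per column vector
    c = 0  # number of pivot columns fixed so far
    for row in range(n):
        if c == len(cols):
            break
        while True:
            nz = [j for j in range(c, len(cols)) if cols[j][row] != 0]
            if not nz:
                break
            # move the column with smallest nonzero |entry| in this row
            # to position c, then reduce the remaining columns by it
            j0 = min(nz, key=lambda j: abs(cols[j][row]))
            cols[c], cols[j0] = cols[j0], cols[c]
            done = True
            for j in range(c + 1, len(cols)):
                if cols[j][row]:
                    q = cols[j][row] // cols[c][row]
                    cols[j] = [a - q * b for a, b in zip(cols[j], cols[c])]
                    if cols[j][row]:  # Euclidean remainder left over
                        done = False
            if done:
                break
        if cols[c][row] != 0:
            c += 1
    return cols[:c]
```

If the input vectors only span a proper sublattice, the function still returns an independent echelon set spanning the same sublattice, mirroring the fact that submodules of free modules over a PID are free.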


For $R$ an integral domain and $K = \operatorname{Frac}(R)$ its fraction field:

  • $M$ being a finitely generated torsionfree $R$-module means that $MK$ is a finite-dimensional $K$-vector space, say of dimension $n$; taking a basis we obtain $$MK = K^n,\qquad M = A R^m,\qquad A \in R^{n \times m}$$

  • Given a sub-$R$-module $B R^d\subset K^n$, let $D = \dim BK^d$.

    That $R$ is a PID means that for any $v \in B K^d$, $vK \cap B R^d = v\alpha R$ for some $\alpha \in K$.

    The quotient space $K^n / vK$ is an $(n-1)$-dimensional vector space and $BR^d / vK$ is a torsionfree $R$-module of rank $D-1$, so by induction on $D$ we obtain a free $R$-module basis $$BR^d / vK = \sum_{j=1}^{D-1}( w_j+vK) R, \qquad w_j \in K^n$$ There are some $\beta_j\in K$ such that $w_j-v \beta_j\in BR^d$, and then $$B R^d = v\alpha R + \sum_{j=1}^{D-1}(w_j-v \beta_j)R$$

  • Again, since $R$ is a PID, $\alpha R+ R = \gamma R$ for some $\gamma\in K$, and hence $$BR^d + vR = v\gamma R+\sum_{j=1}^{D-1}(w_j-v \beta_j)R$$

  • Assume the first $n$ columns of $A$ generate $K^n$; let $B$ be the matrix of the first $n$ columns of $A$, and let $v$ be the $(n+1)$-th column. We obtain a basis for the sub-$R$-module spanned by the first $n+1$ columns of $A$, and doing so iteratively with the remaining columns we obtain a free $R$-module basis for $M$.
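Over $R=\mathbb{Z}$ the ideal arithmetic in this procedure ($vK \cap BR^d = v\alpha R$ and $\alpha R + R = \gamma R$) is just gcd computation, so the column-by-column iteration can be sketched as follows (the triangular storage and the names `extgcd`, `add_vector` are choices made here, not part of the answer above):

```python
def extgcd(a, b):
    """Extended Euclid: return (g, x, y) with a*x + b*y = g = gcd(a, b) >= 0."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = extgcd(b, a % b)
    return (g, y, x - (a // b) * y)

def add_vector(basis, v):
    """One step of the iterative procedure over Z: enlarge the lattice
    spanned by `basis` (a dict mapping pivot row -> vector, kept lower
    triangular) by the integer vector v, keeping a free triangular basis."""
    v = list(v)
    for i in range(len(v)):
        if v[i] == 0:
            continue
        if i not in basis:
            basis[i] = v
            return
        b = basis[i]
        g, x, y = extgcd(b[i], v[i])
        p, q = b[i] // g, v[i] // g
        # the 2x2 matrix [[x, y], [-q, p]] has determinant 1, so this
        # change of generators does not change the lattice they span
        basis[i] = [x * bi + y * vi for bi, vi in zip(b, v)]
        v = [p * vi - q * bi for bi, vi in zip(b, v)]  # entry i is now 0
```

Feeding the spanning vectors in one at a time leaves the values of `basis` as a free basis of their span, exactly as in the iterative column procedure.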


Thanks for all the above answers. Actually, I thought of this question while trying to prove that all finitely generated torsionfree modules over a PID are free (I have a habit of not looking at the proofs in textbooks first, so that was my rather inefficient approach). I have just realised something I had overlooked that would probably answer my own question (as mentioned, I am not sure if this bears a resemblance to the Smith normal form), but I hope this works. Please kindly comment if this is flawed or wrong.

For some module $M$ that fulfils the requirements above, let $\{m_1,\dots, m_n\}$ be a set of elements that generates $M$. We inductively construct a linearly independent set of elements that still spans $M$, as follows:

The base step is trivially true, so at the $j$th step suppose we have obtained linearly independent elements $\{m_1^{'},\dots, m_k^{'}\}$ (with $k\leq j-1$) but there exists a nontrivial $r\in R$ such that $rm_j\in \langle m_1^{'},\dots, m_k^{'}\rangle$. It is easy to see that such elements $r$ form an ideal, with a single generator $r_j$. Then suppose we have the relation $$\displaystyle\sum_{i=1}^k r_i m_i^{'} = r_jm_j$$ where by the torsionfree assumption we may assume $\langle r_1,\dots, r_k, r_j\rangle = R$. Let $\langle r_1,\dots, r_k\rangle = \langle r^*\rangle$ (not necessarily equal to $R$; that is the part where I got stuck) and let $\displaystyle k_jr_j+\sum_{i=1}^k k_ir_i = 1$ for some $k_i\in R$. Consider the elements $\{k_im_j + k_jm_i^{'}\}_{i=1}^k$. It is clear that $\displaystyle\sum_{i=1}^k r_i (k_im_j+k_jm_i^{'}) = (1-r_jk_j)m_j + k_j(r_1m_1^{'}+\dots +r_km_k^{'}) = m_j$. Since for each $i$ there exists some $r_i^*$ such that $r_i^*r^*= r_i$, we may set $m^*_j = \displaystyle\sum_{i=1}^k r_i^* (k_im_j+k_jm_i^{'})$, so that $r^*m^*_j= m_j$, and we thereby have a new relation (once again invoking torsionfreeness) $$\displaystyle \sum_{i=1}^k r_i^*m_i^{'} = r_jm^*_j$$ where this time we may invoke the fact that there exist $\{k_i^*\}_{i=1}^k\subset R$ with $\displaystyle\sum_{i=1}^k k_i^*r_i^* = 1-r_j$, so that $r_j + \displaystyle\sum_{i=1}^k k_i^*r_i^*=1$. We now claim that $\{k_i^*m_j^* + m_i^{'}\}_{i=1}^k$ forms a basis for $\langle m_1,\dots, m_j\rangle = \langle m_1^{'},\dots, m_k^{'}, m_j\rangle$.

  1. Spanning the submodule:

Notice $\displaystyle\sum_{i=1}^k r_i^*(k_i^*m_j^{*} + m_i^{'}) = m_j^*$, so $m_j^*\in S$, where $S$ is the submodule spanned by our purported basis elements. Then clearly $m_i^{'}\in S$ for each $i$, by subtracting $k_i^*m_j^*$ from the corresponding basis element; and $m_j^*\in S\implies m_j\in S$ is trivial, since $m_j = r^*m_j^*$.

  2. Linear Independence: Suppose there exist $q_i\in R$ such that $\displaystyle \sum_{i=1}^k q_i(k_i^*m_j^* + m_i^{'}) = 0$. Then we may assume $q_i = \alpha r_i^*$ and $\sum q_i k_i^* = \alpha r_j$, but this yields $\alpha m_j^* = 0$, which implies $\alpha=0$, and hence all the $q_i$ are $0$ as well.