Difficulty understanding the proof of $\sum_{i\in I}M_{i}=\sum_{k\in A}(\sum_{i\in I_{k}}M_{i})$ for an indexed family of modules


I have a question about the proof of the following corollary (Corollary 2 in the text), which is taken from the book Module Theory: An Approach to Linear Algebra, 1st edition, by Thomas Blyth, page 13.

Corollary 2: If $(I_{k})_{k\in A}$ is a family of non-empty subsets of $I$ with $I=\cup_{k\in A}I_{k}$ then $\sum_{i\in I}M_{i}=\sum_{k\in A}(\sum_{i\in I_{k}}M_{i})$.

Proof: A typical element of the right-hand side is $\sum_{k\in J}(\sum_{i\in J_{k}}m_{i})$ where $J_{k}\in P^{*}(I_k)$ and $J\in P^{*}(A)$. By associativity of addition in $M$ this can be written $\sum_{i\in K}m_{i}$ where $K=\cup_{k\in J}J_{k}\in P^{*}(I).$ Thus the right-hand side is contained in the left-hand side.

As for the converse inclusion, a typical element of the left-hand side is $\sum_{i\in J}m_{i}$ where $J\in P^{*}(I)$. Now $J=J\cap I=\cup_{k\in A}(J\cap I_{k})$ so that if we define $J_{k}=J\cap I_{k}$ we have $J_{k}\in P^{*}(I_{k})$ and, by the associativity of addition in $M$, $\sum_{i\in J}m_{i}=\sum_{k\in B}(\sum_{i\in J_{k}}m_{i})$ where $B\in P^{*}(A)$. Thus the left-hand side is contained in the right-hand side.
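To convince myself the corollary is at least plausible, I wrote a small finite sketch (my own, not from Blyth's book): since a sum of submodules is the submodule generated by the union of the summands, both sides of the corollary are generated by the same set of generators, namely $\cup_{i\in I}M_i$. Here the index bookkeeping is checked directly, with a covering family whose members are allowed to overlap.

```python
# Toy check (my own illustration, not from Blyth): both sides of Corollary 2
# are generated by the same generators.  Each M_i is thought of as generated
# by a single generator indexed by i, so comparing the two sums reduces to
# comparing two index sets.

I = {1, 2, 3, 4, 5}
# A covering family (I_k)_{k in A}; the I_k need not be pairwise disjoint.
I_fam = {"Blue": {1, 2, 4}, "White": {1, 3, 4}, "Red": {2, 5}}
assert set().union(*I_fam.values()) == I  # I = union of the I_k

# Generator indices of the left-hand side, sum over i in I of M_i:
lhs_gens = set(I)
# Generator indices of the right-hand side, sum over k of (sum over I_k):
rhs_gens = {i for Ik in I_fam.values() for i in Ik}

# Same generating set, hence the same submodule.
assert lhs_gens == rhs_gens
print(sorted(lhs_gens))  # [1, 2, 3, 4, 5]
```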

The definitions, the theorem, and the first corollary preceding Corollary 2 are as follows:

Definitions: We say that an $R$-module $M$ is generated by the subset $S$ (or that $S$ is a set of generators of $M$) if the sub-module of $M$ generated by $S$ coincides with $M$. By a finitely-generated $R$-module we mean an $R$-module which has a finite set of generators.

Suppose that $(M_{i})_{i\in I}$ is a family of submodules of an $R$-module $M$ and consider the submodule of $M$ generated by $\cup_{i\in I}M_{i}.$ This is the smallest submodule of $M$ that contains every $M_{i}$. By abuse of language it is often referred to as the submodule of $M$ generated by the family $(M_{i})_{i\in I}$ of submodules; it is characterized as follows.

Theorem: Let $(M_{i})_{i\in I}$ be a family of submodules of an $R$-module $M$. If $P^{*}(I)$ denotes the set of all non-empty finite subsets of $I$, then the submodule of $M$ generated by $\cup_{i\in I}M_{i}$ consists of all (finite) sums $\sum_{j\in J}m_{j}$ where $J\in P^{*}(I)$ and $m_j\in M_j$ for every $j\in J$.

The submodule generated by the family $(M_{i})_{i\in I}$ is called the sum of the family and is denoted by $\sum_{i \in I}M_{i}$. With this notation we have the following immediate consequences of the above.

Corollary 1. If $\sigma:I\rightarrow I$ is a bijection then
$$\sum_{i\in I}M_{i}=\sum_{i\in I}M_{\sigma(i)}$$

Main reasons why the proof is difficult for me to understand:

I am having trouble understanding how $P^{*}(I_k)$ is related to the definition of $P^{*}(A)$ when I try to work out the associativity-of-addition step in $M$.

To be more specific; in the forward direction the proof begins with "A typical element of the right-hand side is $\sum_{k\in J}(\sum_{i\in J_{k}}m_{i})$ where $J_{k}\in P^{*}(I_k)$ and $J\in P^{*}(A)$. By associativity of addition in $M$ this can be written $\sum_{i\in K}m_{i}$ where $K=\cup_{k\in J}J_{k}\in P^{*}(I).$" Basically Blyth is saying:

take an element $\sum_{k\in J}(\sum_{i\in J_{k}}m_{i})\in \sum_{k\in A}(\sum_{i\in I_{k}}M_{i})$, by associativity of addition in $M$, $\sum_{k\in J}(\sum_{i\in J_{k}}m_{i})=\sum_{i\in K}m_{i}=\sum_{i\in {\cup_{k\in J}J_{k}}}m_{i}\in\sum_{i\in I}M_{i}.$ So $\sum_{k\in A}(\sum_{i\in I_{k}}M_{i})\subset \sum_{i\in I}M_{i}$
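The regrouping step can be checked numerically in a small case (my own toy example, with the $J_k$'s chosen pairwise disjoint so that each summand $m_i$ has an unambiguous index): the nested sum over $k\in J$ and $i\in J_k$ equals the single flat sum over $K=\cup_{k\in J}J_{k}$.

```python
# Toy check of the regrouping step (my own example; the J_k are chosen
# pairwise disjoint here).  Elements m_i live in M = Z^6, with m_i
# supported on coordinate i, so m_i is a member of a submodule "M_i".

J = {"a", "b"}                      # J in P*(A)
J_fam = {"a": {0, 1}, "b": {3, 4}}  # each J_k in P*(I_k), disjoint here
m = {0: (2, 0, 0, 0, 0, 0), 1: (0, 5, 0, 0, 0, 0),
     3: (0, 0, 0, 7, 0, 0), 4: (0, 0, 0, 0, 1, 0)}

def vadd(u, v):
    """Componentwise addition in Z^6."""
    return tuple(x + y for x, y in zip(u, v))

zero = (0,) * 6

# Nested sum: sum over k in J of (sum over i in J_k of m_i).
nested = zero
for k in J:
    inner = zero
    for i in J_fam[k]:
        inner = vadd(inner, m[i])
    nested = vadd(nested, inner)

# Flat sum over K = union of the J_k, as in Blyth's proof.
K = set().union(*(J_fam[k] for k in J))
flat = zero
for i in K:
    flat = vadd(flat, m[i])

# Associativity (and commutativity) of addition makes these agree.
assert nested == flat == (2, 5, 0, 7, 1, 0)
```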

Associativity of addition means $a+(b+c)=(a+b)+c$. Since Blyth does not flesh out the details in the quoted lines of his proof, I tried to work them out myself. I am having difficulty because I cannot resolve the ambiguity in the definition of $P^{*}(I_k)$ in relation to $P^{*}(A)$: here $P^{*}(I_k)=\{J_{k}: J_{k}\text{ is a non-empty finite subset of }I_{k}\}$ and $P^{*}(A)=\{J: J\text{ is a non-empty finite subset of }A\}$. Do all the subsets $J$ of $A$ consist of subcollections of the $I_{k}$? According to the definition of $P^{*}(I_k)$, each $I_{k}$ can also contain subcollections $J_{k}$, and I am not sure whether this is for the same $k$ only. If it is for the same $k$, then $I_{k_{0}}$ can only have one subset $J_{k_{0}}$ for some $k_{0}\in A$. I tried using the following example:

Example: Let $A=\{1,2,\dots,12\}$ and $I=\{I_{1},I_{2},\dots,I_{12}\}$; the collection $\{I_{k}\}_{k\in A}$ consists of subsets of $A$. (I am not sure whether the subsets have to partition $A$.) Let $M=\{M_{1}, M_{2}, \dots, M_{12}\}$. For any $J\subset A$, $k\in J$ implies $I_{k}=\{J_{k}\}_{k\in J}$, but $J_{k}\subset I$ according to $J_{k}\in P^{*}(I)$. So the $J_{k}$'s refer to the $J$'s which are subsets of $A$.

For simplicity's sake, suppose the subsets of $A$ are $\{\{1,2,3\},\{4,5,6\},\{7,8,9\},\{10,11,12\},\{6,7,8\},\{8,10,11\},\{3,4\},\{5,6\},\{2\},\{9\},\{10\},\{11\},\{12\}\}$; then the sets $I_{k}$ are:

$I_{1}=\{1,2,3\}$
$I_{2}=\{4,5,6\}$
$I_{3}=\{7,8,9\}$
$I_{4}=\{10,11,12\}$
$I_{5}=\{6,7,8\}$
$I_{6}=\{8,10,11\}$
$I_{7}=\{3,4\}$
$I_{8}=\{5,6\}$
$I_{9}=\{2\}$
$I_{10}=\{9\}$
$I_{11}=\{10\}$
$I_{12}=\{11\}$

Then do we have $J_{1}\subset I_{1},$ $J_{2}\subset I_{2},\ldots ,J_{12}\subset I_{12}$, where either of the following holds:

$(1):$ for each $p\in\{1,2,\cdots ,12\}$, say $p=1$: $I_{1}=\{1,2,3\}$ and $J_{1}=\{1,2,3\},$ $J_{2}=\{1,2\},$ $\cdots$, $J_{6}=\{2\},$ with every $J_{i}\subset I_{1}$; in other words, $|J_{1}|\leq |I_{1}|$; or

$(2):$ $I_{1}=\{1,2,3\}=J_{1},$ and similarly for all $p\in A$, $|I_{p}|=|J_{p}|$ if $|I_{p}|\geq 1$.

Thank you in advance.

On BEST ANSWER

The $I_k$'s and $J_k\subset I_k$ are not subsets of $A$ but of $I.$ The set $A$ is just a set of indices $k,$ which serves to index them. Your example is confusing because there, $A=I.$ Let me take a simpler example.

$I=\{1,2,3,4,5\},$ $A=\{Blue, White, Red\},$ $I_{Blue}=\{1,2,4\},$ $I_{White}=\{1,3,4\},$ and $I_{Red}=\{2,5\}.$ Thus, $I=\cup_{k\in A}I_k.$ (You don't specify that Blyth assumes the $I_k$'s are pairwise disjoint, and I trust you. This is a bit annoying but we will fix that later.)

In our example(s), $A$ and $I$ are finite, so their respective $P^*$'s (sets of their finite subsets) are their $P$'s (power sets). In that case, $J$ (which I prefer to call $B$ from now on, because it is not of the same nature as the $J_k$'s) might be taken equal to $A$ and $J_k$ to $I_k.$ But in Blyth's general case, an element of a (possibly infinite) sum of modules is a finite sum of elements of some of these modules. So let us forget $A$ and $I$ are finite here, and let us mimic Blyth's care of taking finite subsets of indices.

A "typical element" $m\in\sum_{k\in A}\left(\sum_{i\in I_k}M_i\right)$ is a (finite) sum of (finite) sums, i.e. of the form $\sum_{k\in B}\left(\sum_{i\in J_k}m_i\right),$ where $B$ is a (finite) subset of $A$ and each $J_k$ is a (finite) subset of $I_k.$ E.g. $B=\{Blue,White\},$ $J_{Blue}=\{1,2\}$ and $J_{White}=\{1,3\}.$

Now comes the slight trouble that our $J_k$'s happen to be not disjoint (because we didn't forbid the $I_k$'s to intersect). Blyth's notation is incorrect, it would lead to $m=(m_1+m_2)+(m_1+m_3).$ We can repair it here by writing $m=(m_1+m_2)+(n_1+m_3).$ (In the general case, the $m_i$'s should be doubly indexed, to keep track of from which $J_k$ their index $i$ comes, so $m$ should rather be written $\sum_{k\in B}\left(\sum_{i\in J_k}m_{k,i}\right).$)

By associativity (and commutativity) of addition, our $m$ equals $m'_1+m_2+m_3$ where $m'_1=m_1+n_1$ (here again, we had to "fix" Blyth's notation). And $K=\cup_{k\in B}J_k=\{1,2,3\}.$ (In the general case we would write $m=\sum_{i\in K}m'_i$ where $m'_i=\sum_{k\in B\text{ such that }i\in J_k}m_{k,i}.$)
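The repaired bookkeeping above can be checked numerically (my own choice of values; vectors in $\mathbb{Z}^5$ stand in for module elements, with $M_i$ the elements supported on coordinate $i$). The doubly indexed summands $m_{k,i}$ handle the overlap $1\in J_{Blue}\cap J_{White}$, and the flat sum over $K=\{1,2,3\}$ uses $m'_i=\sum_{k\,:\,i\in J_k}m_{k,i}$.

```python
# Numeric check of the answer's repaired notation (values are my own).
B = ["Blue", "White"]                       # B in P*(A)
J_fam = {"Blue": {1, 2}, "White": {1, 3}}   # J_Blue and J_White overlap at 1

# Doubly indexed summands m_{k,i}: the index i = 1 occurs in both J_k's,
# so m_{Blue,1} (called m_1 in the answer) and m_{White,1} (called n_1)
# are two different elements of M_1.
m = {("Blue", 1): (4, 0, 0, 0, 0), ("Blue", 2): (0, 3, 0, 0, 0),
     ("White", 1): (6, 0, 0, 0, 0), ("White", 3): (0, 0, 2, 0, 0)}

def vadd(u, v):
    """Componentwise addition in Z^5."""
    return tuple(x + y for x, y in zip(u, v))

zero = (0,) * 5

# Nested sum: m = (m_1 + m_2) + (n_1 + m_3).
nested = zero
for k in B:
    inner = zero
    for i in J_fam[k]:
        inner = vadd(inner, m[(k, i)])
    nested = vadd(nested, inner)

# Flat sum over K = {1, 2, 3} with m'_i = sum of m_{k,i} over k with i in J_k.
K = set().union(*J_fam.values())
assert K == {1, 2, 3}
mprime = {i: zero for i in K}
for (k, i), v in m.items():
    mprime[i] = vadd(mprime[i], v)
flat = zero
for i in K:
    flat = vadd(flat, mprime[i])

# m'_1 = m_1 + n_1 absorbs the overlap; the regrouped sums agree.
assert nested == flat == (10, 3, 2, 0, 0)
```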