Proof Explanation: Let $R$ be the ring of $2 × 2$ matrices with rational entries. Prove that the only ideals of $R$ are $(0)$ and $R.$


Let $R$ be the ring of $2 × 2$ matrices with rational entries. Prove that the only ideals of $R$ are $(0)$ and $R.$

This is a question from the book "Topics in Algebra" by I. N. Herstein, from the chapter on Ring Theory (page 136, 2nd edition).

I know this question has been asked many times on this site. But nearly every answer is a reformulation, or more or less a variation, of the following solution:

Suppose $I$ is an ideal of $R$ and $0\neq A ∈ I.$ Let $α$ be a non-zero entry in $A,$ and assume that it lies in row $r$ and column $s.$ Let $E_1 ∈ R$ have all entries $0$ except for the $(1, r)$ entry which is $1.$ Let $E_2 ∈ R$ have all entries $0$ except for the $(s, 1)$ entry which is $1.$ Since $I$ is a (two sided) ideal, $B = E_1 AE_2 ∈ I.$ But $B$ is the matrix with all entries $0$ except for the $(1, 1)$ entry which is $α.$ Using a similar argument, we conclude that $C ∈ I,$ where $C$ is the matrix with all entries $0$ except for the $(2, 2)$ entry which is $α.$ Thus $B + C ∈ I.$ Since $B + C$ is invertible, we conclude that $I = R.$
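The key step of this proof can be checked numerically. Here is a small sketch in Python (using numpy), with a hypothetical $A$ whose nonzero entry $\alpha = 7$ sits at row $r = 2$, column $s = 1$:

```python
import numpy as np

# A with a nonzero entry alpha at row r = 2, column s = 1 (1-indexed)
A = np.array([[0., 0.],
              [7., 3.]])   # alpha = 7 at position (2, 1)
r, s = 2, 1

# E1 has a single 1 at (1, r); E2 has a single 1 at (s, 1)
E1 = np.zeros((2, 2)); E1[0, r - 1] = 1.0
E2 = np.zeros((2, 2)); E2[s - 1, 0] = 1.0

B = E1 @ A @ E2
print(B)   # alpha = 7 isolated at (1, 1), zeros elsewhere
```

Changing `A`, `r`, and `s` and rerunning confirms the general pattern: $E_1$ picks out row $r$, $E_2$ picks out column $s$.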

Now, the part where they mention, "But $B$ is the matrix with all entries $0$ except for the $(1, 1)$ entry which is $α$," seems sketchy or blurry to me. I don't see how to "know" that this works: I can't form a picture in my mind of why multiplying $A$ by $E_1$ and $E_2$ as $E_1AE_2$ yields such a matrix $B.$ I know I can verify this by checking all the cases, but that is highly laborious, and I don't think that's what the solution asks the reader to do. Is there a way to see directly that multiplying $A$ by the matrices $E_1, E_2$ appropriately gives us a matrix like $B$?

Also, I can't fully convince myself of the claim: "Using a similar argument, we conclude that $C ∈ I,$ where $C$ is the matrix with all entries $0$ except for the $(2, 2)$ entry which is $α.$"

I am just looking for some way to make these things look obvious to me. I'm not sure whether I've expressed myself clearly, but any help with this issue will be highly appreciated.


There are 3 best solutions below


To understand this, you must look at the operations of the four matrices $$E_1=\begin{pmatrix}1 & 0\\0&0\end{pmatrix}\\E_2=\begin{pmatrix}0 & 1\\0&0\end{pmatrix}\\E_3=\begin{pmatrix}0 & 0\\1&0\end{pmatrix}\\E_4=\begin{pmatrix}0 & 0\\0&1\end{pmatrix}$$ acting on both the right and left sides of a matrix $A=\begin{pmatrix}a&b\\c&d\end{pmatrix}$

You have to write the left multiplications and right multiplications out so you can get this intuitively.

Multiplying $A$ on the left by $E_1$ gives a matrix with $A$'s first row in the first row and zeroes in the second row. Multiplying on the right by $E_1$ gives a matrix whose first column is the first column of $A$, with zeroes in the second column.

If you write out the eight multiplications you will see that you can create a matrix from $A$ that places either of $A$'s two rows in any row in the result and either of $A$'s columns in any column. Thus, as long as one of $A$'s entries is non-zero you can move that entry to $(1,1)$ and by a separate operation you can move it to $(2,2)$. Now you can invert the matrix to show that the ideal contains the identity matrix which means the ideal is equal to $R$.

For example, suppose $a\ne 0$ in $A$. Then $$E_1A=\begin{pmatrix}a&b\\0&0\end{pmatrix}$$ and $$E_1AE_1=\begin{pmatrix}a&0\\0&0\end{pmatrix}$$

Likewise, $$E_3AE_2=\begin{pmatrix}0&0\\0&a\end{pmatrix}$$ Adding those together gives $aI$ and multiplying that by $\frac{1}{a}I$ gives the identity matrix. Since the identity is in the ideal, the ideal is all of $R$.
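The computation above can be replayed directly. A minimal sketch in Python (numpy), assuming a concrete $A$ with $a = 5 \ne 0$:

```python
import numpy as np

# A hypothetical concrete A with a != 0, in the answer's notation
A = np.array([[5., 2.],
              [1., 4.]])
a = A[0, 0]

E1 = np.array([[1., 0.], [0., 0.]])
E2 = np.array([[0., 1.], [0., 0.]])
E3 = np.array([[0., 0.], [1., 0.]])

top = E1 @ A @ E1      # a isolated in the (1,1) slot
bot = E3 @ A @ E2      # a isolated in the (2,2) slot

aI = top + bot         # equals a times the identity
I2 = (1.0 / a) * aI    # so the ideal contains the identity
print(I2)
```

Since the ideal is closed under left multiplication by $\frac{1}{a}I$, the identity matrix lies in it, exactly as the answer states.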


It may be helpful to see a proof of the equivalent statement about the ring $R=\mathrm{End}(V)$ of linear self-maps of a two-dimensional $\mathbb Q$-vector space $V$.

Suppose that $I$ is a two-sided ideal of $R$. We must show that if $I$ is not equal to $\{0\}$ then $I=R$. Thus suppose $\alpha\in I\backslash\{0\}$. If $\alpha$ is invertible, then $I$ contains a unit, and hence equals $R$. If $\alpha \neq 0$ and is not invertible, then $\dim(\ker(\alpha))=\dim(\text{im}(\alpha)) =1$. Thus we may find nonzero vectors $u_0,u_1 \in V$ with $\alpha(u_0)=0$ and $\alpha(u_1)=v_1\neq 0$. It follows that $\{u_0,u_1\}$ is a basis of $V$, and we may find $v_0 \in V$ with $\{v_0,v_1\}$ another basis of $V$. Thus we may define invertible linear maps $\gamma_1,\gamma_2\in R$ such that $\gamma_1(u_0) = u_1$, $\gamma_1(u_1)=u_0$ , and $\gamma_2(v_0)=v_1, \gamma_2(v_1)=v_0$.

Then $\gamma_2\circ \alpha\circ \gamma_1(u_0)=v_0$, $\gamma_2\circ \alpha \circ \gamma_1(u_1)=0$. But then $\alpha+\gamma_2\circ \alpha\circ \gamma_1\in I$ maps the basis $\{u_0,u_1\}$ to the basis $\{v_0,v_1\}$, and hence is invertible, so that $I=R$ as required.
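This argument can also be sanity-checked with coordinates. Below is a hedged sketch in Python (numpy), with one hypothetical choice of a rank-one $\alpha$ and of the vectors $u_0, u_1, v_0, v_1$; with these particular vectors both $\gamma_1$ and $\gamma_2$ happen to be the coordinate swap:

```python
import numpy as np

# A hypothetical nonzero, non-invertible (rank-1) alpha in End(Q^2)
alpha = np.array([[1., 0.],
                  [0., 0.]])

u0 = np.array([0., 1.])        # alpha(u0) = 0
u1 = np.array([1., 0.])        # alpha(u1) = v1 != 0
v1 = alpha @ u1
v0 = np.array([0., 1.])        # {v0, v1} is a basis

# gamma1 swaps u0 <-> u1; gamma2 swaps v0 <-> v1.
gamma1 = np.array([[0., 1.], [1., 0.]])
gamma2 = np.array([[0., 1.], [1., 0.]])

beta = alpha + gamma2 @ alpha @ gamma1   # maps {u0, u1} to {v1, v0}
print(np.linalg.det(beta))               # nonzero, so beta is invertible
```

The nonzero determinant confirms that $\alpha + \gamma_2\circ\alpha\circ\gamma_1$ is invertible, so the ideal containing $\alpha$ contains a unit.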


If $C=(c_{ij})$ is an $n\times p$ matrix and $D=(d_{ij})$ is a $p\times m$ matrix, then the $(r,s)$ entry of $CD$ is $$\sum_{k=1}^p c_{rk}d_{ks} = c_{r1}d_{1s} + c_{r2}d_{2s} + \cdots + c_{rp}d_{ps}.$$ Now, suppose that $E_{uv} = (e_{ij})$ is an $n\times n$ matrix whose entries are all $0$, except for the $(u,v)$ entry, which is $1$. The $(r,s)$ entry of $E_{uv}C$ is $$\sum_{k=1}^{n} e_{rk}c_{ks}.$$ Since $e_{rk}=0$ unless $(r,k)=(u,v)$, only the entries in row $u$ can be nonzero; the $(u,s)$ entry will be $$\sum_{k=1}^n e_{uk}c_{ks} = e_{uv}c_{vs} = c_{vs}.$$ So, the $(u,1)$ entry is $c_{v1}$, the $(u,2)$ entry is $c_{v2}$, etc.

In summary:

The result of multiplying $E_{uv}$ by $C$ is the matrix whose $u$th row is the $v$th row of $C$, and every other entry is $0$.

Now suppose $E_{uv}=(e_{ij})$ is the $m\times m$ matrix whose entries are all $0$, except for entry $(u,v)$, which is $1$. The $(r,s)$ entry of $DE_{uv}$ is $$\sum_{k=1}^m d_{rk}e_{ks}.$$ Since $e_{ks}=0$ unless $(k,s)=(u,v)$, every entry except those of the form $(r,v)$ is equal to $0$; that is, every column except perhaps the $v$th column is $0$. In the $v$th column, the $(r,v)$ entry is $$\sum_{k=1}^m d_{rk}e_{kv} = d_{ru}.$$ In summary:

The matrix $DE_{uv}$ is a matrix in which the $v$th column is the $u$th column of $D$, and every other entry is equal to $0$.

Now, in your situation you have $A$; you left multiply by $E_{1r}$ and you right multiply by $E_{s1}$. Let's see what that does:

  1. The result of computing $E_{1r}A$ is a matrix where the $1$st row is the $r$th row of $A$, and every other row is $0$. So this matrix looks like $$B=\left(\begin{array}{cccc} a_{r1} & a_{r2} & \cdots & a_{rn}\\ 0 & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{array}\right).$$
  2. The result of computing $(E_{1r}A)E_{s1} = BE_{s1}$ is the matrix whose $1$st column is the $s$th column of $B$, and every other entry is $0$. The $s$th column of $B$ is $$\begin{array}{c}a_{rs}\\0\\\vdots\\0\end{array}$$ so that $E_{1r}AE_{s1}$ is $$\left(\begin{array}{cccc} a_{rs} & 0 & \cdots & 0\\ 0 & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 0 \end{array}\right).$$
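The two steps above can be played with concretely. A short sketch in Python (numpy), with a hypothetical helper `E(u, v)` building the matrix units and an arbitrary $2\times 2$ matrix $A$:

```python
import numpy as np

def E(u, v, n=2):
    """n x n matrix with a single 1 at position (u, v), 1-indexed."""
    M = np.zeros((n, n))
    M[u - 1, v - 1] = 1.0
    return M

A = np.array([[1., 2.],
              [3., 4.]])
r, s = 2, 1                  # target entry a_{rs} = 3

left = E(1, r) @ A           # row r of A moved into row 1, rest zero
both = left @ E(s, 1)        # then column s moved into column 1
print(both)                  # a_{rs} = 3 isolated at (1, 1)
```

Trying other values of `r` and `s` shows the general rule: left multiplication by $E_{1r}$ selects a row, right multiplication by $E_{s1}$ selects a column, and their composition isolates the single entry $a_{rs}$.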