showing that $G$ does not have an embedding in $GL_n(F)$ for any $n\ge 1$ and field $F$


Let $G$ denote the group of bijective maps $g : \mathbb{Z}\to \mathbb{Z}$ such that $g$ fixes all but finitely many integers. Show that there does not exist a field $F$ and an $n\ge 1$ so that $G$ is isomorphic to a subgroup of (embeds in) $GL_n(F),$ the set of invertible $n\times n$ matrices with entries in $F$.

I know that for any field $F$, every finite group embeds into $GL_n(F)$ for some $n\ge 1.$ I think it might be useful to use a contradiction here. So suppose there exists an $n\ge 1$ and a field $F$ so that $\phi : G\to S$ is an isomorphism, where $S$ is a subgroup of $GL_n(F),$ denoted $S\leq GL_n(F).$ There are various properties of isomorphisms that may be useful; for instance, the order of $g\in G$ equals the order of $\phi(g)$ in $S$, $S$ is abelian if and only if $G$ is abelian, etc. I'm not sure how to use the properties of $G$ and $\phi$ to obtain a contradiction here though.

Edit: I was wondering if @JyrkiLahtonen could elaborate on his answer? I think I mostly understand it, but I don't get a few details. Below is my understanding of his answer.

$G_2$ is abelian because the transpositions $\sigma_i$ and $\sigma_j$ commute for all $i$ and $j$ (they move disjoint pairs of integers), and every element of $G_2$ is a product $\sigma_{i_1}^{a_1}\cdots \sigma_{i_m}^{a_m}$ with $i_1,\ldots,i_m\ge 1$ and $a_1,\ldots,a_m\in\mathbb{Z}$; hence $\sigma_{a}^{b}\sigma_{c}^{d} = \sigma_{c}^{d}\sigma_{a}^{b}$ for any $a,c\ge 1$ and $b,d\in\mathbb{Z}$. It is an infinite direct sum of cyclic groups of order two because for each $i$ the cyclic group $\langle \sigma_i \rangle = \{e, \sigma_i\}$ intersects the subgroup $\langle \{ \langle \sigma_j\rangle : j\neq i\}\rangle$ trivially, every subgroup $\langle \sigma_i\rangle$ of $G_2$ is normal in $G_2$ since $G_2$ is abelian, and $G_2 = \langle \{ \langle \sigma_j\rangle : j\ge 1\} \rangle = \langle \sigma_1,\sigma_2,\ldots \rangle.$

The vectors $\frac{1}2(x+\phi(\sigma_i)(x))$ and $\frac{1}2(x-\phi(\sigma_i)(x))$ are eigenvectors of $\phi(\sigma_i)$ (when nonzero) with corresponding eigenvalues $+1$ and $-1$. Since $x$ was arbitrary, every vector of $V$ is a linear combination of eigenvectors of $\phi(\sigma_i)$, so the eigenvectors of $\phi(\sigma_i)$ span $V$. Hence we can extract a basis of $V$ consisting of eigenvectors of $\phi(\sigma_i)$ (e.g. we start with an eigenvector $v_1$, and if $V\neq \mathrm{span} \{v_1\}$, we pick an eigenvector $v_2 \notin \mathrm{span} \{v_1\}$ and continue until we get a basis). Since $V$ has an ordered basis consisting of eigenvectors of $\phi(\sigma_i)$, the matrix $\phi(\sigma_i)$ is diagonalizable.

$N$ is the number of matrices. The above shows that the space $V$ has a basis consisting of eigenvectors of $\phi(\sigma_i)$ for each fixed $i$. Every element of $V$ can be written as a sum of two eigenvectors of $\phi(\sigma_{k+1})$ corresponding to the eigenvalues $+1$ and $-1$. Also, these eigenspaces intersect trivially: if $0\neq w\in V_+\cap V_-,$ then $w$ is an eigenvector of $\phi(\sigma_{k+1})$ with eigenvalue both $1$ and $-1$, which is impossible. So $V= V_+\oplus V_-.$

The transformations $\phi(\sigma_j), 1\leq j\leq k$ commute with $\phi(\sigma_{k+1})$ because $\phi(\sigma_j)\phi(\sigma_{k+1}) = \phi(\sigma_j \sigma_{k+1}) = \phi(\sigma_{k+1}\sigma_j) = \phi(\sigma_{k+1})\phi(\sigma_j).$ Fix $1\leq j\leq k,$ and let $v_1 \in V_+$ be an eigenvector of $\phi(\sigma_{k+1})$ with eigenvalue $1$. Then $\phi(\sigma_{k+1})(\phi(\sigma_j)(v_1)) = \phi(\sigma_j)(\phi(\sigma_{k+1})(v_1)) = \phi(\sigma_j)(v_1),$ so $\phi(\sigma_j)(v_1)$ is an eigenvector of $\phi(\sigma_{k+1})$ with eigenvalue $1$ (it is nonzero since $\phi(\sigma_j)$ is invertible). Hence $\phi(\sigma_j)(V_+) \subseteq V_+.$ Similarly $\phi(\sigma_j)^{-1}(V_+)\subseteq V_+,$ so $V_+ = \phi(\sigma_j) (V_+).$ Similarly $\phi(\sigma_j)(V_-) = V_-.$

So does the induction hypothesis apply for $N=k$?

An eigenvector $v$ is a shared eigenvector of all $\phi(\sigma_i)$ if it's an eigenvector of every $\phi(\sigma_i),$ right?

What does "they must all respect the decomposition (*)" mean precisely?

Hence if $v$ is a shared eigenvector of all $\phi(\sigma_j)$'s in the basis for $V_-,$ for any $1\leq j\leq k,$ $\phi(\sigma_{k+1})(v) = \phi(\sigma_{k+1})(-\phi(\sigma_j)(v)) = -\phi(\sigma_j)(\phi(\sigma_{k+1})(v))$, so $\phi(\sigma_{k+1})(v)$ is an eigenvector of $\phi(\sigma_j)$ with eigenvalue $-1$. But is $v$ an eigenvector of $\phi(\sigma_{k+1})$ with eigenvalue $-1$?

How does the claim follow from the fact that $GL_n(F)$ contains only $2^n$ diagonal matrices with entries $\pm 1$?

The statement is clearly true, but each $\phi(\sigma_j)$ is only known to be diagonalizable separately: there is an invertible matrix $P_j$, possibly depending on $j$, so that $P_j^{-1} \phi(\sigma_j) P_j$ is diagonal for each $j$.

Why can we adjoin a primitive third root of unity $\omega$ to the field $F$? Does this just mean we replace $F$ with $F\cup \{\omega\}$?


There are 2 answers below.

Best answer:

Fleshing out the idea from the comments. There is probably a simpler way.

Assume first that $\mathrm{char} F\neq2$.

Consider the 2-cycles $\sigma_i$ that interchange $2i$ and $2i-1$ and keep the rest of the integers fixed. The subgroup $G_2$ generated by the permutations $\sigma_1,\sigma_2,\ldots$ is abelian and is an infinite direct sum of cyclic groups of order two. It suffices to show that it is impossible to embed $G_2$ into any $GL_n(F)$.

Assume contrariwise that an injective homomorphism $\phi:G_2\to GL(V)$, $V=F^n$, exists for some natural number $n$. Fix $i$ for a moment. For all $x\in V$ we have $$x=\frac12(x+\phi(\sigma_i)(x))+\frac12(x-\phi(\sigma_i)(x)).$$ Here the two terms are eigenvectors of $\phi(\sigma_i)$ belonging to eigenvalues $+1$ and $-1$ respectively. Therefore every vector of $V$ is a linear combination of eigenvectors of $\phi(\sigma_i)$. Hence the matrix of $\phi(\sigma_i)$ is diagonalizable.
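As a quick sanity check, here is a short pure-Python computation (with a hypothetical example: the swap matrix `T` stands in for $\phi(\sigma_i)$, and the vector `x` is arbitrary) verifying that the two displayed terms are eigenvectors with eigenvalues $+1$ and $-1$ and sum back to $x$:

```python
# Numerical check of x = 1/2 (x + T x) + 1/2 (x - T x) for an involution T.
# T below (a coordinate swap, so T*T = I) is a hypothetical stand-in for
# phi(sigma_i); any matrix with T^2 = I would do.

from fractions import Fraction as Fr

def mat_vec(T, x):
    """Multiply the matrix T (list of rows) by the column vector x."""
    return [sum(T[i][j] * x[j] for j in range(len(x))) for i in range(len(T))]

T = [[0, 1], [1, 0]]            # swaps the two coordinates, so T squared = I
x = [Fr(3), Fr(5)]              # an arbitrary vector

Tx = mat_vec(T, x)
x_plus  = [(a + b) / 2 for a, b in zip(x, Tx)]   # the eigenvalue +1 term
x_minus = [(a - b) / 2 for a, b in zip(x, Tx)]   # the eigenvalue -1 term

assert [a + b for a, b in zip(x_plus, x_minus)] == x   # the terms sum back to x
assert mat_vec(T, x_plus) == x_plus                    # T x_+ = +x_+
assert mat_vec(T, x_minus) == [-a for a in x_minus]    # T x_- = -x_-
```

The check uses exact rational arithmetic (`fractions`) so the equalities are literal, not approximate.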

Next I recap the argument proving the fact that $\phi(\sigma_i), i=1,2,\ldots,N$, are simultaneously diagonalizable, no matter how large the natural number $N$ is. That is, the space $V$ has a basis $\mathcal{B}$ consisting of shared eigenvectors of all the linear transformations $\phi(\sigma_i)$. The argument goes by induction on $n$ and $N$. Above we settled the case $N=1$ for all $n$. Assume the claim settled for $N=k$. Consider $\phi(\sigma_{k+1})$. We already showed that $V$ is a direct sum $$V=V_+\oplus V_-\qquad(*)$$ of eigenspaces of $\phi(\sigma_{k+1})$ corresponding to the eigenvalues $+1$ and $-1$. The key observation is that as the transformations $\phi(\sigma_j), j=1,2,\ldots,k$, commute with $\phi(\sigma_{k+1})$ they must all respect the decomposition $(*)$, i.e. map each of $V_+$ and $V_-$ onto itself.

Applying the induction hypothesis (for $N=k$) to the restrictions of $\phi(\sigma_j), j=1,\ldots,k$, to $V_+$ and to $V_-$, we see that both $V_+$ and $V_-$ have bases consisting of shared eigenvectors of all $\phi(\sigma_j), j=1,\ldots,k$. Since every nonzero vector of $V_+$ (resp. $V_-$) is an eigenvector of $\phi(\sigma_{k+1})$ with eigenvalue $+1$ (resp. $-1$), these basis vectors are shared eigenvectors for $j=k+1$ as well. Concatenating the two bases proves the induction step.
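A hypothetical miniature of the induction step, checked in pure Python: the permutation matrices of the disjoint transpositions $(1\,2)$ and $(3\,4)$ acting on $F^4$ commute, and the four vectors listed below form a shared eigenbasis.

```python
# Two commuting involutions (permutation matrices of disjoint transpositions)
# and an explicit shared eigenbasis for them; a toy model of simultaneous
# diagonalizability.  The matrices are hypothetical stand-ins for phi(sigma_j).

def mat_vec(T, x):
    return [sum(T[i][j] * x[j] for j in range(4)) for i in range(4)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def perm_matrix(p):
    """Matrix sending basis vector e_j to e_{p[j]}."""
    return [[1 if p[j] == i else 0 for j in range(4)] for i in range(4)]

T1 = perm_matrix([1, 0, 2, 3])            # swaps coordinates 0 and 1
T2 = perm_matrix([0, 1, 3, 2])            # swaps coordinates 2 and 3

assert mat_mul(T1, T2) == mat_mul(T2, T1)  # disjoint transpositions commute

shared_basis = {                           # vector -> (eigenvalue for T1, for T2)
    (1, 1, 0, 0):  (+1, +1),
    (1, -1, 0, 0): (-1, +1),
    (0, 0, 1, 1):  (+1, +1),
    (0, 0, 1, -1): (+1, -1),
}
for v, (lam1, lam2) in shared_basis.items():
    assert mat_vec(T1, list(v)) == [lam1 * a for a in v]
    assert mat_vec(T2, list(v)) == [lam2 * a for a in v]
```

In the notation of the proof, the first two basis vectors span the $\pm1$ eigenspaces inside one coordinate block and the last two inside the other.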

The claim then follows from the fact that $GL_n(F)$ contains only $2^n$ diagonal matrices with entries $\pm1$. Indeed, after a simultaneous change of basis each $\phi(\sigma_i)$, $i=1,\ldots,N$, becomes a diagonal matrix with entries $\pm1$ (the only possible eigenvalues, since $\sigma_i^2=e$), so when $N>2^n$ two of the $\sigma_i$ must have the same image, contradicting injectivity.

Assume then that $\mathrm{char} F=2$.

We can adjoin a primitive third root of unity $\omega$ to the field $F$ if it did not have one to begin with; that is, we pass to the extension field $F(\omega)\supseteq F$, and an embedding into $GL_n(F)$ is in particular an embedding into $GL_n(F(\omega))$. We modify the first case as follows. Let $\sigma_i$ be the $3$-cycle permuting the integers $3i-2\mapsto 3i-1\mapsto 3i\mapsto 3i-2$ and keeping the rest fixed.

The linear transformation $T=\phi(\sigma_i)$ is again diagonalizable because we have the decomposition $$ x=\frac13(x+T(x)+T^2(x))+\frac13(x+\omega^2T(x)+\omega T^2(x))+\frac13(x+\omega T(x)+\omega^2T^2(x))\qquad(**) $$ with the three terms being eigenvectors of $T$ belonging to eigenvalues $1$, $\omega$, $\omega^2$ respectively (note $3\neq0$ since $\mathrm{char}\,F=2$). Recall that $T^3=\mathrm{id}_V$.
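A numeric check of the algebraic identity $(**)$, run over the complex numbers for convenience (the identity only uses $T^3=\mathrm{id}$ and $\omega^3=1$): `T` is the cyclic-shift matrix of a $3$-cycle, a hypothetical stand-in for $\phi(\sigma_i)$.

```python
# Check that the three terms of (**) are eigenvectors of T with eigenvalues
# 1, omega, omega^2, and that they sum back to x.  T is the cyclic shift on
# three coordinates, so T^3 = id.

import cmath

w = cmath.exp(2j * cmath.pi / 3)          # a primitive third root of unity

def T(x):
    """Cyclic shift: (x0, x1, x2) -> (x2, x0, x1); a 3-cycle, T^3 = id."""
    return [x[2], x[0], x[1]]

x = [1.0, 2.0, 5.0]                        # an arbitrary vector
Tx, T2x = T(x), T(T(x))

terms = [
    [(a + b + c) / 3 for a, b, c in zip(x, Tx, T2x)],              # eigenvalue 1
    [(a + w**2 * b + w * c) / 3 for a, b, c in zip(x, Tx, T2x)],   # eigenvalue w
    [(a + w * b + w**2 * c) / 3 for a, b, c in zip(x, Tx, T2x)],   # eigenvalue w^2
]
for lam, t in zip([1, w, w**2], terms):
    assert all(abs(u - lam * v) < 1e-9 for u, v in zip(T(t), t))
assert all(abs(sum(col) - xi) < 1e-9 for col, xi in zip(zip(*terms), x))
```

The comparisons are approximate (`1e-9`) only because floating-point complex arithmetic is used; the identity itself is exact.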

The rest of the argument is similar. There exists a basis of $V$ consisting of shared eigenvectors of all the transformations $\phi(\sigma_i), i=1,2,\ldots,N$. When $N$ exceeds $3^n$ this is a problem because the number of diagonal matrices with entries in $\{1,\omega,\omega^2\}$ is $3^n$.

Second answer:

Hint: The group $G$ has a lot of commuting elements, since any disjoint cycles commute. On the other hand, it's relatively difficult for matrices to commute. For instance, when diagonalizable matrices all commute with each other, they are simultaneously diagonalizable, which severely restricts their possible properties. See if you can use this to get a contradiction.
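The first claim of the hint, that disjoint cycles commute, can be sanity-checked directly; here is a small pure-Python sketch (the dictionaries represent finitely supported permutations of $\mathbb{Z}$, identity off the listed keys):

```python
# Disjoint cycles commute: compose two disjoint 3-cycles in both orders.
# A permutation of Z with finite support is stored as a dict; any integer
# absent from the dict is fixed.

def compose(p, q):
    """Return p o q, i.e. x -> p(q(x)), as a dict on the union of supports."""
    keys = set(p) | set(q)
    return {x: p.get(q.get(x, x), q.get(x, x)) for x in keys}

g = {1: 2, 2: 3, 3: 1}                     # the 3-cycle (1 2 3)
h = {4: 5, 5: 6, 6: 4}                     # the disjoint 3-cycle (4 5 6)

assert compose(g, h) == compose(h, g)      # disjoint cycles commute
```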

More details are hidden below.

Suppose you have an embedding $G\to GL_n(F)$. Extending $F$, we may assume it is algebraically closed; in particular, any element of $GL_n(F)$ whose (finite) order is not divisible by the characteristic of $F$ is diagonalizable. Pick some $m>1$ that is not divisible by the characteristic of $F$ and take an infinite family $(g_k)$ of pairwise disjoint $m$-cycles in $G$. These must map to an infinite family of distinct commuting elements of $GL_n(F)$ of order $m$. These elements are simultaneously diagonalizable, and thus we may assume they are actually diagonal. But $F$ has only finitely many $m$th roots of unity, so $GL_n(F)$ contains only finitely many diagonal elements of order $m$. This is a contradiction.
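The final counting step can be illustrated numerically (over $\mathbb{C}$, which is algebraically closed, with hypothetical values of $m$ and $n$): diagonal matrices whose entries are $m$th roots of unity number exactly $m^n$, and every such matrix satisfies $D^m=I$.

```python
# Pigeonhole sketch: diagonal matrices with m-th-root-of-unity entries are
# finite in number (m**n of them), and each has order dividing m.  So an
# infinite family of distinct diagonal elements of order m is impossible.

import cmath
from itertools import product

m, n = 3, 2                                     # hypothetical cycle length / size
roots = [cmath.exp(2j * cmath.pi * k / m) for k in range(m)]

diagonals = list(product(range(m), repeat=n))   # label diagonals by exponent tuples
assert len(diagonals) == m ** n                 # only finitely many such matrices

for exps in diagonals:                          # D^m = I for every such diagonal D
    assert all(abs(roots[e] ** m - 1) < 1e-9 for e in exps)
```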