I'm working on a problem that asks to characterize the irreducible complex representations of the group $G = \mathbb{Z}_{p} \times \mathbb{Z}_{p}$, where $p$ is prime. The one-dimensional representations are easy to characterize, and I want to claim that they are the only irreducible representations, i.e. that any representation of $G$ on $\mathbb{C}^{n}$ is reducible if $n \geq 2$. I think I have a solution, and would appreciate anyone checking it, since I've only been studying representation theory by myself for about a week or two. I also feel it may be a bit unwieldy, like there's a "better" solution, perhaps one that's less heavy on the matrices.

Let $a, b$ generate $G$, so $G = \left< a, b \mid a^p = b^p = e,\ ab = ba \right>$, let $\phi: G \to \mathrm{GL}_{n}(\mathbb{C})$ be a representation with $n \geq 2$, and let $A = \phi(a)$, $B = \phi(b)$. We want to show that any pair of linear transformations $A, B \in \mathrm{GL}_{n}(\mathbb{C})$ satisfying $A^p = B^p = I$ and $AB = BA$ induces a reducible representation.

So suppose we're handed $A, B \in \mathrm{GL}_{n}(\mathbb{C})$ such that $A^p = B^p = I$ and $AB = BA$. We know both $A$ and $B$ are diagonalizable, so consider the eigenvalues $\alpha_1, \ldots, \alpha_n$ (not necessarily distinct) of $A$. Now suppose we've ordered the eigenvalues so that, for $i \leq j$, \begin{align*} (\alpha_i = \alpha_j) \Rightarrow (\forall k \in \{i, i+1, \ldots, j-1, j\})(\alpha_k = \alpha_i). \end{align*} The idea here is to order the eigenvalues so that equal eigenvalues sit in contiguous runs: if $\alpha_i = \alpha_j$, then every eigenvalue between positions $i$ and $j$ also equals $\alpha_i$. Now let $\{e_1, \ldots, e_n\}$ be a basis for $\mathbb{C}^n$ such that $A e_i = \alpha_i e_i$ for all $i$. I want to show that if $A$ is non-scalar, then $B$ is "block-diagonal" with respect to this basis. So let $A = \operatorname{diag}(\alpha_1, \ldots, \alpha_n)$ and $B = [t_{i,j}]_{i,j=1}^{n}$.
Then \begin{align*} A B & = [\alpha_i t_{i,j}]_{i,j=1}^{n}, \\ B A & = [\alpha_j t_{i,j}]_{i,j=1}^{n}. \end{align*} Since $AB = BA$, this forces $\alpha_i t_{i,j} = \alpha_j t_{i,j}$ for all $i, j$, and hence $\alpha_i \neq \alpha_j \Rightarrow t_{i,j} = 0$. So $B$ has a block-diagonal structure, where the blocks correspond to the runs of equal $\alpha_i$.
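As a quick numerical sanity check of this entrywise constraint (just an illustration with made-up eigenvalues, not part of the proof), one can verify in numpy that zeroing exactly the entries $t_{i,j}$ with $\alpha_i \neq \alpha_j$ makes a generic $B$ commute with $A = \operatorname{diag}(\alpha_1, \ldots, \alpha_n)$:

```python
import numpy as np

# Toy eigenvalues: a run of two equal eigenvalues, then a distinct one.
alphas = np.array([1.0, 1.0, 2.0])
A = np.diag(alphas)

# Start from a generic B, then zero out the entries forced by the
# relation (alpha_i - alpha_j) t_ij = 0, i.e. those with alpha_i != alpha_j.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
B[alphas[:, None] != alphas[None, :]] = 0.0

# B now commutes with A, and the off-block entries vanish.
assert np.allclose(A @ B, B @ A)
assert B[0, 2] == 0.0 and B[1, 2] == 0.0
assert B[2, 0] == 0.0 and B[2, 1] == 0.0
```

The surviving entries form a $2 \times 2$ block (the repeated eigenvalue) and a $1 \times 1$ block, matching the block-diagonal structure described above.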
Now if $A$ has more than one distinct eigenvalue, then this block-diagonality is non-trivial. The blocks correspond to the eigenspaces of $A$, which (because of the block-diagonality) are also fixed by $B$. Each such eigenspace is then a proper, nonzero subspace invariant under both $A$ and $B$, so the representation is reducible.
If, on the other hand, all the eigenvalues of $A$ are identical, then $A$ is just a scalar transformation with scalar $\alpha = \alpha_1$, i.e. $A = \alpha I$. We can then consider instead an eigenbasis $\{f_1, \ldots, f_n\}$ for $B$, so that $B f_i = \beta_i f_i$ and $A f_i = \alpha f_i$. Hence $B(\mathbb{C} f_i) = \mathbb{C} f_i$, so the span of each $f_i$ is fixed by the representation, yielding a (complete) reduction of the representation.
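The whole argument can be checked on a small concrete instance (an illustrative toy example with $p = 3$, $n = 2$, and a change of basis I've chosen arbitrarily): two commuting matrices with $A^3 = B^3 = I$ share an eigenvector, and the line it spans is a one-dimensional subrepresentation.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)                # primitive cube root of unity
P = np.array([[1.0, 1.0], [0.0, 1.0]])   # arbitrary invertible change of basis
Pinv = np.linalg.inv(P)

# A and B are simultaneously diagonalized by P, hence commute,
# and their eigenvalues are cube roots of unity, so A^3 = B^3 = I.
A = P @ np.diag([w, w**2]) @ Pinv
B = P @ np.diag([w**2, w]) @ Pinv

assert np.allclose(A @ B, B @ A)
assert np.allclose(np.linalg.matrix_power(A, 3), np.eye(2))
assert np.allclose(np.linalg.matrix_power(B, 3), np.eye(2))

# The first column of P is a common eigenvector: C v is invariant.
v = P[:, 0]
assert np.allclose(A @ v, w * v)
assert np.allclose(B @ v, w**2 * v)
```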
My questions are twofold. The first is simply whether my argument is correct and does in fact answer the question. The second is whether there's a better or "more correct" way to show that every representation of $\mathbb{Z}_p \times \mathbb{Z}_p$ on $\mathbb{C}^n$ ($n \geq 2$) is reducible. I know that mathematicians sometimes get antsy when bases are invoked that weren't there to start with, and the move I made of ordering the eigenvalues of $A$ seemed a little contrived and ugly to state; I'd rather not have to. Moreover, I know the statement reduces to "$B$ maps the eigenspaces of $A$ back to themselves", a less basis-centric statement, leading me to wonder if my argument could be phrased more in those terms.
Thank you for your help!
There is a standard theorem that all irreducible complex representations of finite abelian groups are one-dimensional. And the number of distinct such representations is equal to the order of the group.
This follows, among other ways, from standard theorems that the number of irreducible representations is equal to the number of conjugacy classes and that the sum of squares of the dimensions of the irreducible representations equals the order of the group.
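To make the counting explicit for $G = \mathbb{Z}_p \times \mathbb{Z}_p$ (a sketch of the arithmetic, granting the two standard theorems just cited):

```latex
% G is abelian, so every conjugacy class is a singleton, and the number
% of irreducible representations is |G| = p^2. The sum-of-squares
% formula then reads
\sum_{i=1}^{p^2} d_i^2 \;=\; |G| \;=\; p^2,
% a sum of p^2 positive-integer squares equaling p^2, which forces
% d_i = 1 for every i: all irreducibles are one-dimensional.
```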
Note that a more elementary argument simply uses Schur's Lemma. Over any algebraically closed field, the only linear transformations commuting with every matrix of an irreducible representation are the scalar matrices. Since $G$ is abelian, all of its representing matrices commute with each other and thus must all be scalar matrices. Clearly this means that the representation fails to be irreducible unless the dimension is 1.
You can find an elementary and brief proof of Schur's Lemma on Wikipedia.