Group representation theory has applications to many areas and problems in mathematics; one such example is the finest simultaneous block diagonalization of a set of matrices with common symmetries. According to page 7 of this paper, let $\{T(g)\}$ be an orthogonal matrix representation of a group $G$. If a set of matrices $\{A_p\}$ all share the symmetry described by $G$, that is,
$T(g)^\intercal A_p T(g) = A_p, \ \forall g\in G \ \text{and} \ \forall p,$
then a simultaneous block diagonalization of the $A_p$ can be obtained through the decomposition of the representation $\{T(g)\}$ into irreducible representations. In particular, if $\{P^\intercal T(g) P\}$ is a direct sum of irreducible representations with dimensions $n_i$ and multiplicities $m_i$, then each element of $\{P^\intercal A_p P\}$ has a common block structure: a direct sum of blocks of dimension $m_i$, each with multiplicity $n_i$.
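For concreteness, here is a numerical check of this claim in a tiny case (the choice of $S_3$, the group-averaging construction of a symmetric matrix, and all variable names below are my own, not from the paper). The defining permutation representation of $S_3$ on $\mathbb{R}^3$ decomposes as trivial ($n_1=1$) plus standard ($n_2=2$), each with multiplicity $1$:

```python
import numpy as np
from itertools import permutations

# Defining (permutation) representation of S3 on R^3: six orthogonal matrices.
T = [np.eye(3)[list(p)] for p in permutations(range(3))]

# Build a matrix with the symmetry T(g)^T A T(g) = A by averaging over the group.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = sum(t.T @ M @ t for t in T) / len(T)

# Orthogonal P decomposing R^3 into trivial (n1=1) + standard (n2=2) summands.
P = np.column_stack([
    np.ones(3) / np.sqrt(3),            # spans the trivial summand
    np.array([1, -1, 0]) / np.sqrt(2),  # these two span the standard summand
    np.array([1, 1, -2]) / np.sqrt(6),
])

# B is block diagonal: a 1x1 block (m1=1), then a 2x2 scalar block lambda*I,
# i.e. a block of dimension m2=1 repeated with multiplicity n2=2.
B = P.T @ A @ P
```

Running this, the off-diagonal entries of `B` vanish (up to rounding) and the two entries coming from the standard summand coincide, as the claimed block structure predicts.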
Can anyone point me to references/theorems in group representation theory on which this claim is based? Thanks!
[I am sure that there is a slicker way of serving this up, and will happily delete this when it emerges. This is just the old-fashioned matrix way of looking at things.]
Suppose $T$ is a representation of the finite group $G$ on the finite-dimensional complex space $V$. That is the same as saying that $V$ is a $\mathbb{C}G$-module. By Maschke's Theorem we can write $V=V_1\oplus\dots\oplus V_m$ where the $V_i$ are irreducible. Let $\alpha:V\to V$ commute with all the $T(g)$.
Choose a basis of $V$ compatible with this decomposition. Then in terms of block matrices we have that $$ T(g)=\left[\begin{matrix} T_1(g) & 0 & \dots &0\\ 0 &T_2(g) &\dots& 0\\ 0 & 0 & \ddots & 0\\ 0 & 0& \dots &T_m(g)\\ \end{matrix} \right], \ \ \ A=\left[\begin{matrix} A_{11} & A_{12} & \dots &A_{1m}\\ A_{21} & A_{22} & \dots &A_{2m}\\ \vdots &\vdots& \ddots & \vdots\\ A_{m1} & A_{m2}& \dots &A_{mm}\\ \end{matrix} \right]. $$
The hypothesis of commutativity now gives $T_{r}(g)A_{rs}=A_{rs}T_{s}(g)$ for all $g$. As the $V_i$ are irreducible, Schur's Lemma yields that $A_{rs}=O$ whenever $V_r$ and $V_s$ are not isomorphic, and, choosing the bases so that isomorphic summands carry identical matrices $T_r(g)=T_s(g)$, that $A_{rs}=\lambda_{rs} I$ for some $\lambda_{rs}\in\mathbb{C}$ when they are isomorphic.
If we now group the $V_i$ into "clumps" of isomorphic modules, we have that both $T(g)$ and $A$ are in block diagonal form; both have zeros wherever the row and column correspond to non-isomorphic modules.
That means that we need only consider the case when $V=V_1\oplus\dots\oplus V_m$ is a direct sum of isomorphic irreducibles. What we have proved in this case is that $A=\Lambda \otimes I_n$, where $n$ is the vector space dimension of $V_1$ and $\Lambda$ is the $m\times m$ matrix of the $\lambda_{rs}$. Reordering the basis so as to collect the corresponding basis vectors of the $m$ summands turns this into $I_n\otimes\Lambda$, that is, $n$ copies of the $m\times m$ block $\Lambda$: blocks of dimension $m$ with multiplicity $n$, exactly as in the claim you quote.
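This last case can also be seen numerically. A minimal sketch (my own choice of example, using the two-dimensional standard representation of $S_3$) with $m=2$ copies of one irreducible of dimension $n=2$, where Schur's Lemma forces $A=\Lambda\otimes I_2$:

```python
import numpy as np
from itertools import permutations

# 2x2 standard representation of S3: restrict the permutation matrices
# to the invariant plane orthogonal to (1,1,1).
Q = np.column_stack([np.array([1, -1, 0]) / np.sqrt(2),
                     np.array([1, 1, -2]) / np.sqrt(6)])
S = [Q.T @ np.eye(3)[list(p)] @ Q for p in permutations(range(3))]

# V = V1 + V2, two copies of the same irreducible: T(g) = I_2 tensor S(g).
T = [np.kron(np.eye(2), s) for s in S]

# A commuting matrix, obtained by group-averaging a random 4x4 matrix.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = sum(t.T @ M @ t for t in T) / len(T)

# Each 2x2 block A_rs should be a scalar matrix lambda_rs * I_2,
# so A = Lam tensor I_2 with Lam the 2x2 matrix of the lambda_rs.
Lam = np.array([[A[0, 0], A[0, 2]],
                [A[2, 0], A[2, 2]]])
```

Checking `np.allclose(A, np.kron(Lam, np.eye(2)))` confirms the Kronecker structure $\Lambda\otimes I_n$ derived above.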