How should I describe this combination of group actions?


Let $A$ be a multiplicative abelian group, and for an integer $m \gt 0$ let $D_m=D_m(A)$ be the group of $m \times m$ diagonal matrices with entries in $A$. Now $D_m$ has a subgroup $A^* \cong A$ formed by the "scalar" matrices, in which every diagonal component is the same element of $A$. The factor group $E_m=\frac{D_m}{A^*}$ (which we may call the co-scalar matrices) can be identified with the subgroup comprising those diagonal matrices for which the product of all the entries is $1_A$.

I want to describe the following situation. $T_{mn}$ is the set of $m \times n$ matrices with elements in $A$.

$T_{mn}$ is acted on by $D_m$ and $D_n$ by multiplication on the left and right, respectively: $D_m$ acts on rows and $D_n$ acts on columns.

But what really happens is that $T_{mn}$ is acted on by $E_m$ and by $E_n$ (on rows and columns, respectively), and also by a scalar group isomorphic to $A$, which multiplies every entry of a matrix in $T_{mn}$ by the same element of $A$.
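To make the two side actions concrete, here is a minimal pure-Python sketch, with integer entries standing in for elements of a multiplicative abelian group $A$ (the matrices `M`, `L`, `R` below are hypothetical examples, not anything from the question):

```python
def matmul(X, Y):
    """Multiply matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def diag(entries):
    """Diagonal matrix with the given diagonal entries."""
    n = len(entries)
    return [[entries[i] if i == j else 0 for j in range(n)] for i in range(n)]

M = [[1, 1, 1],
     [1, 1, 1]]            # a 2x3 matrix in T_{23}
L = diag([2, 3])           # an element of D_2, acting on rows
R = diag([5, 7, 11])       # an element of D_3, acting on columns

LM = matmul(L, M)          # row i of M scaled by L[i][i]
MR = matmul(M, R)          # column j of M scaled by R[j][j]
LMR = matmul(LM, R)        # both actions at once
```

Left multiplication by `L` scales the rows and right multiplication by `R` scales the columns, so the two actions visibly commute.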

QUESTION: What is the appropriate description for this? It seems to have something in common with what happens in a tensor product $X \otimes_R Y$, where $x \otimes ry = xr \otimes y$, but I as yet have only a rather vague notion of what a tensor product is, and I have not seen that notation used with groups.

I would appreciate any help in gaining a clearer idea of how this really rather simple situation should be properly described.


Central product

You are correct that you have a tensor product in the matrix sense. The more usual term here is “central product”. You want to mod out by $A^*$, but the $E_m$ way is not quite right.

You have an action of $D_m \times D_n$ where the element $(R,C)$ acts on the matrix $M$ by $RMC$ (where I ignore left/right corrections since $D_m$ and $D_n$ are abelian).

However, this action has a kernel $K=\left\{ \left( D_m(a), D_n(a^{-1}) \right) : a \in A \right\} \cong A^*$, where $D_m(a)$ denotes the scalar matrix with $a$ in every diagonal entry, and you do want to mod out by it. So you want the group $D_m \mathsf{Y} D_n = (D_m \times D_n)/K$.
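As a quick sanity check that pairs in $K$ act trivially, here is a hypothetical small example in pure Python, with `Fraction` values standing in for a group $A$ of invertible scalars:

```python
from fractions import Fraction

def matmul(X, Y):
    """Multiply matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def scalar(a, n):
    """The scalar matrix a * I_n."""
    return [[a if i == j else 0 for j in range(n)] for i in range(n)]

a = Fraction(3)
M = [[1, 2, 3],
     [4, 5, 6]]

# The pair (a I_2, a^{-1} I_3) lies in the kernel K:
# (a I_2) M (a^{-1} I_3) = M.
fixed = matmul(matmul(scalar(a, 2), M), scalar(1 / a, 3))
```

The global scaling on the left is undone by the inverse scaling on the right, which is exactly why only the central product $D_m \mathsf{Y} D_n$ acts faithfully.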

In GAP, you can use CentralProductOfMatrixGroups to form the central product in which the amalgamated (squished) part is the group of scalar matrices. This construction is used all the time. The function even returns a matrix group, as you'd like. How does it do it?

Keeping it matrixy

So you probably liked the $E_m$ idea because it realized your quotient group as a subgroup (not all quotient groups can be realized this way, but this one can). However, you ran into trouble because $E_m \times E_n$ wasn't quite right, and $E_m \times D_n$ probably disturbed your sense of symmetry. Luckily, matrix algebra has an old solution for this.

Note that each of these actions of $D_n$ and $D_m$ is linear, so we view the set of all $M$ as a vector space with a basis: the standard basis of matrix units $E_{ij}$ (zero everywhere except for a single $1$ in position $(i,j)$), in a particular order, probably lexicographic. In this basis the action of $\overline{(R,C)} \in D_m \mathsf{Y} D_n$ is given by a matrix, namely KroneckerProduct(R,C) in GAP, and the Kronecker product $R \otimes C$ in mathy land.

Kronecker products are pretty simple to understand: one of the matrices is the "pattern" and the other is the "fill". The pattern matrix decides which multiple of the fill matrix to use. So $\begin{bmatrix}1&2\\3&4\\5&6\end{bmatrix} \otimes A$ is the block matrix $\begin{bmatrix} A & 2A \\ 3A & 4A \\ 5A & 6A \end{bmatrix}$.
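The pattern/fill description translates directly into code. Here is a minimal pure-Python sketch (no claim that this matches GAP's internals):

```python
def kron(P, F):
    """Kronecker product P (x) F: P is the 'pattern', F is the 'fill'.
    Block (i, j) of the result is the scalar multiple P[i][j] * F."""
    p, q = len(P), len(P[0])
    r, s = len(F), len(F[0])
    return [[P[i // r][j // s] * F[i % r][j % s]
             for j in range(q * s)] for i in range(p * r)]

# The 3x2 pattern from the text with a 1x1 fill [[10]]:
example = kron([[1, 2], [3, 4], [5, 6]], [[10]])
```

The index arithmetic `i // r, j // s` picks the pattern entry and `i % r, j % s` picks the position inside the fill block, which is exactly the "which multiple of the fill goes where" description above.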

Application

I don't believe central products or Kronecker products help very much in the application (except that Kronecker products made my code quite a bit shorter). However, I do include one simplification in the discussion below that made it much easier to recognize the “unique” solution, so that the other solutions stand out clearly.

Given an $m \times n$ matrix $X$, an $m \times 1$ matrix $P$, and a $1 \times n$ matrix $Q$, find diagonal matrices $L_{m \times m}$ and $R_{n \times n}$ such that $Q=J_{1\times m} LXR$ and $LXRJ_{n\times 1}=P$, where $J_{a\times b}$ is an $a \times b$ matrix consisting of $1$s. The goal is to show that $L \otimes R$ is uniquely determined if it exists.

If $(L,R)$ and $(L',R')$ are two solutions, set $Y=LXR$. Then $Q=JY$ and $YJ=P$, and $L'XR' = L''YR''$ where $L'' = L' L^{-1}$ and $R''=R^{-1} R'$, so we might as well start from $Y$ for the uniqueness proof. This means the only solution we want is $L \otimes R = 1$, which is a little easier to detect.

The collection $C$ of all $m \times n$ matrices $Y$ such that $Q=JY$ and $YJ=P$ is somewhat large (see "A matrix with given row and column sums with a hidden Kronecker product in it"). If $Y,Z \in C$ then $J(Y-Z)=Q-Q=0$ and $(Y-Z)J=P-P=0$, so $C$ is a coset of the space $V$ of all matrices whose row and column sums are $0$.
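The coset claim is easy to check numerically. In this hypothetical $2 \times 3$ example, `Y` and `Z` share their row sums and column sums, so their difference lands in $V$:

```python
def row_sums(Y):
    return [sum(row) for row in Y]

def col_sums(Y):
    return [sum(row[j] for row in Y) for j in range(len(Y[0]))]

# Two matrices with the same row sums and the same column sums:
Y = [[1, 2, 3],
     [4, 5, 6]]
Z = [[0, 3, 3],
     [5, 4, 6]]

# Their difference has all row and column sums equal to zero,
# i.e. it lies in the space V.
diff = [[y - z for y, z in zip(ry, rz)] for ry, rz in zip(Y, Z)]
```

So $C = Y + V$ for any single member $Y \in C$, which is what "coset of $V$" means here.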

The application (in group action language) is to show that the action of $D_m \mathsf{Y} D_n = \{ L \otimes R \}$ has a small setwise stabilizer: if $LXR \in C$ and $L'XR' \in C$, then $L \otimes R = L' \otimes R'$. I don't think this is too useful, other than that it doesn't seem especially likely to be true.

The algebra for $LXR$ is a bit of a hassle, so we can replace it with Kronecker products. The vec operation (Flat in GAP) lays out a matrix as a vector. It has the property (for our diagonal matrices; in general a transpose appears on one factor) that $\operatorname{vec}(LXR) = (L\otimes R)\cdot \operatorname{vec}(X)$. So we are trying to solve $((L\otimes R)-1)\cdot \operatorname{vec}(Y) = \operatorname{vec}(Z)$ for diagonal matrices $L,R$ and a zero row/column sum matrix $Z \in V$. If the only solution is $L \otimes R =1$ and $Z=0$, then this is an affirmative answer; any other solution is an explicit counterexample (though the application required $L$ and $R$ to be positive).
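The vec identity can be checked in a few lines of pure Python on a hypothetical example. Flat reads a matrix row by row, so `vec` below is row-major; with that convention $\operatorname{vec}(LXR) = (L \otimes R^T)\operatorname{vec}(X)$ in general, which collapses to $(L \otimes R)\operatorname{vec}(X)$ here because a diagonal $R$ equals its own transpose:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def diag(entries):
    n = len(entries)
    return [[entries[i] if i == j else 0 for j in range(n)] for i in range(n)]

def kron(P, F):
    p, q, r, s = len(P), len(P[0]), len(F), len(F[0])
    return [[P[i // r][j // s] * F[i % r][j % s]
             for j in range(q * s)] for i in range(p * r)]

def vec(X):
    """Row-major flattening, like Flat in GAP."""
    return [x for row in X for x in row]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

L = diag([2, 3])
R = diag([5, 7, 11])
X = [[1, 2, 3],
     [4, 5, 6]]

lhs = vec(matmul(matmul(L, X), R))   # vec(LXR)
rhs = matvec(kron(L, R), vec(X))     # (L (x) R) . vec(X)
```

Both sides come out equal, which is what lets the $LXR$ algebra be replaced by a single matrix-vector product.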

I don't see a great way to solve this. For a specific $Y$, it is a bunch of multivariate quadratic equations which can be handled using the theory of equations. These are the same equations you started with, just slightly cleaned up using $Y$ instead of $X$. I've done a few examples, and there are plenty of counterexamples if you allow $L$ or $R$ to have negative entries. I haven't found any counterexample with both $L$ and $R$ positive.