I'm looking for an efficient algorithm for generating idempotent matrices (i.e. matrices $M$ such that $MM=M$) of dimension $n \times n$ belonging to $\mathbb{Z}_q^{n \times n}$, with $q=2^{k}$. Generating square diagonal matrices with 0 and 1 entries, or matrices with rows whose entries are 1 in every column, gives rise to idempotent matrices. But I would like to find an algorithm that doesn't only generate these two types of idempotent matrices.
Efficient Algorithm for generating idempotent matrices
630 Views, asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
There are 2 solutions below.
By $Z_q$ I presume you mean the integers mod $q$.
Let $U$ be a random $n \times n$ matrix with entries in $Z_q$. If $\det(U)$ is odd, and thus invertible in $Z_q$, then $U$ has an inverse $V$ over $Z_q$, which can be found using row reduction.
For any subset $S$ of $\{1,\ldots,n\}$, let $U_S$ and $V_S$ denote the matrices formed from the columns of $U$ indexed by $S$ and the rows of $V$ indexed by $S$, respectively. Then you can take $M = U_S V_S$: since $VU=I$, the submatrix $V_S U_S = I_{|S|}$, and therefore $M^2 = U_S(V_SU_S)V_S = U_SV_S = M$.
Here is some sample Maple code.
with(LinearAlgebra[Modular]):
q := 2^9:   # modulus q = 2^k
n := 10:
# rejection-sample U until det(U) is odd, i.e. U is invertible mod 2^k
U := Random(q, n, n, integer[]):
for count from 1 while Determinant(q, U)::even do
  U := Random(q, n, n, integer[])
od:
V := Inverse(q, U):
# S = {1,...,5}: M = U_S . V_S is idempotent
M := Mod(q, U[.., 1..5] . V[1..5, ..], integer[]);
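For readers without Maple, here is a rough Python translation of the same construction (pure-Python arithmetic, so only practical for small $n$; the function names are my own, not part of the answer):

```python
import random

def mat_mul(A, B, q):
    """Multiply matrices A and B, reducing entries mod q."""
    return [[sum(a * b for a, b in zip(row, col)) % q for col in zip(*B)]
            for row in A]

def inverse_mod(U, q):
    """Invert U over Z_q (q = 2^k) by Gauss-Jordan elimination.
    Raises StopIteration if no odd (i.e. invertible) pivot is found."""
    n = len(U)
    A = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(U)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] % 2 == 1)
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], -1, q)          # pivot is odd, hence a unit mod 2^k
        A[col] = [x * inv % q for x in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * b) % q for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

def random_idempotent(n, k, s):
    """Sample M = U_S V_S over Z_{2^k}, with S = the first s indices."""
    q = 1 << k
    while True:
        U = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
        try:
            V = inverse_mod(U, q)
            break
        except StopIteration:                  # det(U) even: resample U
            pass
    U_S = [row[:s] for row in U]               # columns of U indexed by S
    V_S = V[:s]                                # rows of V indexed by S
    return mat_mul(U_S, V_S, q)
```

Taking $S=\{1,\ldots,s\}$ loses no generality here, since $U$ is uniformly random and permuting its columns (and the matching rows of $V$) preserves the distribution.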
Since $\mathbb Z/2^k\mathbb Z$ is an elementary divisor ring, every idempotent matrix over it is diagonalisable. As $0$ and $1$ are the only roots of the equation $x^2=x$ in $\mathbb Z/2^k\mathbb Z$, we have $M=S(I_r\oplus0_{(n-r)\times(n-r)})S^{-1}$, where $S$ is some invertible matrix and $r\in\{0,1,\ldots,n\}$ is the rank of $M$.
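The scalar claim is easy to see directly: $x^2=x$ means $x(x-1)\equiv0\pmod{2^k}$, and since exactly one of $x$ and $x-1$ is odd (hence a unit), the other must vanish mod $2^k$. A quick brute-force check in Python (my own sketch):

```python
# Brute-force check that 0 and 1 are the only solutions of x^2 = x in Z/2^k Z:
# x(x-1) = 0 mod 2^k, and exactly one of x, x-1 is odd, hence a unit.
k = 9
q = 1 << k
scalar_idempotents = [x for x in range(q) if (x * x - x) % q == 0]
print(scalar_idempotents)   # -> [0, 1]
```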
By Gaussian elimination, every invertible matrix $S$ can be written as the product of $m\le\frac{n(n+1)}2-1$ elementary matrices of shear or transposition type and an invertible triangular matrix. Observe that if $T$ is an invertible triangular matrix, then $$ T(I_r\oplus0_{(n-r)\times(n-r)})T^{-1} =\pmatrix{I_r&W_{r\times(n-r)}\\ 0&0}\tag{1} $$ for some matrix $W_{r\times(n-r)}$. It follows that $$ M=E_1E_2\cdots E_m\pmatrix{I_r&W_{r\times(n-r)}\\ 0&0}E_m^{-1}\cdots E_2^{-1}E_1^{-1},\tag{2} $$ where $r$ is a random integer between $0$ and $n$, $W_{r\times(n-r)}$ is a random rectangular matrix, $m$ is a random integer between $0$ and $\frac{n(n+1)}2-1$, and each $E_j$ is an elementary matrix of either shear or transposition type.
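As a sanity check, equation $(2)$ can be sketched in Python: start from the middle block projection and conjugate it by random shears and transpositions, realised as paired row/column operations (the function name and distribution choices here are my own illustration, not part of the answer):

```python
import random

def sample_idempotent(n, k):
    """Sample an idempotent matrix over Z_{2^k} following equation (2):
    start from the block projection [[I_r, W], [0, 0]] and conjugate it
    by m random elementary matrices (shears and transpositions)."""
    q = 1 << k
    r = random.randint(0, n)
    M = [[0] * n for _ in range(n)]
    for i in range(r):
        M[i][i] = 1
        for j in range(r, n):
            M[i][j] = random.randrange(q)       # the random block W
    m = random.randrange(n * (n + 1) // 2)      # number of elementary factors
    for _ in range(m):
        i, j = random.sample(range(n), 2)
        if random.random() < 0.5:
            # transposition: E M E^{-1} swaps rows i,j and columns i,j
            M[i], M[j] = M[j], M[i]
            for row in M:
                row[i], row[j] = row[j], row[i]
        else:
            # shear E = I + c*E_ij: left-multiplying by E adds c*(row j)
            # to row i; right-multiplying by E^{-1} then subtracts
            # c*(column i) from column j
            c = random.randrange(q)
            M[i] = [(a + c * b) % q for a, b in zip(M[i], M[j])]
            for row in M:
                row[j] = (row[j] - c * row[i]) % q
    return M
```

Since each step is a conjugation, idempotency is preserved exactly, whatever random choices are made.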
In actual computation, we don't really multiply the elementary matrices together or calculate their inverses, but perform row/column operations instead. Also, to make the distribution more uniform, one should let $r$ follow a non-uniform (e.g. binomial) distribution. Alternatively, let $\mathcal I$ be a random subset of $\{1,2,\ldots,n\}$, $\mathcal J$ be its complement and $r=|\mathcal I|$. Then one may replace the block matrix in the middle of $(2)$ by another projection matrix $P$ such that $P(\mathcal I,\mathcal I)=I_r,\ P(\mathcal I,\mathcal J)=W_{r\times(n-r)}$ and $P(\mathcal J,:)=0$.
At any rate, the total computational complexity is $\sim n^3$ in the worst case and $\sim\frac13n^3$ on average. This is a bit more efficient (albeit more complicated) than the method proposed by Robert Israel, because we don't use rejection sampling and we don't calculate determinants or matrix inverses here. The following is the Octave/Matlab code for the generation: