I need to find a finite generating set for $GL(n,\mathbb{Z})$. I heard somewhere once that this group is generated by the elementary matrices - of course, if I'm going to prove that $GL(n,\mathbb{Z})$ has a finite generating set, I would need to prove that any matrix $M\in GL(n,\mathbb{Z})$ can be written as a product of only finitely many of them.
At first, I didn't have a clue how to do this, so I did a bit of scouring the internet for any information that might be useful. There were a few proofs, or hints at proofs, including here on MSE and also on MathOverflow, but they were either too advanced, didn't give enough details, assumed theory I can't assume at this point (for example, about rings or principal ideal domains), or were extremely complicated (as in 4 pages with 4 lemmas that needed to be proven first - and this example didn't even state exactly what the finite generating set of $GL(n,\mathbb{Z})$ is).
This looks promising. In their notation, essentially, if $n$ is even, then $GL(n,\mathbb{Z})$ is generated by $s_{1}$ and $s_{3}$. And when $n$ is odd, $-s_{1}$ and $s_{3}$ generate $GL(n,\mathbb{Z})$, where $s_{1}=\begin{pmatrix} 0&0&0&\cdots &0&1\\ 1&0&0&\cdots & 0&0\\0&1&0&\cdots & 0 &0 \\ \vdots & \vdots & \vdots & & \vdots &\vdots \\ 0&0&0&\cdots & 0&0\\ 0&0&0&\cdots &1 & 0\end{pmatrix}$ and $s_{3}=\begin{pmatrix} 1&1&0&\cdots &0&0\\ 0&1&0&\cdots & 0&0\\0&0&1&\cdots & 0 &0 \\ \vdots & \vdots & \vdots & & \vdots &\vdots \\ 0&0&0&\cdots & 1&0\\ 0&0&0&\cdots &0& 1\end{pmatrix}$
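As a small sanity check, I wrote a rough Python script (the helper names `matmul` and `identity` are just my own; this is not from the linked source) to verify one fact that seems relevant: conjugating $s_{3}$ by $s_{1}$ cycles the off-diagonal $1$ around, producing each elementary matrix $E_{i,i+1}$ in turn (indices mod $n$). Here it is for $n=4$:

```python
# Sanity check for n = 4: conjugating s3 by the cyclic permutation s1
# walks the off-diagonal 1 around, hitting each E_{i,i+1} (indices mod n).

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

n = 4
# s1: permutation matrix of the n-cycle, with 1s at (1,n) and (i+1,i).
s1 = [[0] * n for _ in range(n)]
s1[0][n - 1] = 1
for i in range(n - 1):
    s1[i + 1][i] = 1
# s1 is a permutation matrix, so its inverse is its transpose.
s1_inv = [list(row) for row in zip(*s1)]

# s3 = I + e_{12}: the elementary matrix with a 1 in the (1,2) entry.
s3 = identity(n)
s3[0][1] = 1

M = s3
for i in range(n - 1):
    M = matmul(matmul(s1, M), s1_inv)   # conjugate: M -> s1 M s1^{-1}
    expected = identity(n)
    expected[(i + 1) % n][(i + 2) % n] = 1
    assert M == expected
print("conjugation by s1 cycles the elementary matrix: OK")
```

So at least the generators interact the way I'd expect, even if that doesn't tell me how a proof should go.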
What is a relatively simple way to prove this, one that does not invoke rings or ideals at all (only group theory is permissible), and does not make reference to papers or to $\operatorname{Hom}(G,C_{p})$ (whatever that is)?
Since the group operation in $GL(n,\mathbb{Z})$ is matrix multiplication, I'm guessing I'd have to show that any matrix $A$ can be obtained by multiplying various combinations of $s_{1}$ and $s_{3}$ when $n$ is even, and of $-s_{1}$ and $s_{3}$ when $n$ is odd. But what do those combinations look like when we're dealing with matrix multiplication? Do they involve scalar multiples, the way integer linear combinations do when the operation is addition? And how do we know what order to put the factors in, since matrix multiplication is not commutative?
Thank you.
The simple proof is: row reduction. Here's a brief outline, assuming some basic matrix algebra.
One learns in linear algebra or matrix algebra how to use row reduction to invert matrices over $\mathbb{R}$, to compute determinants, to solve systems of linear equations, etc. Essentially, the row reduction algorithm from linear algebra tells us how to express an invertible real $n \times n$ matrix (an element of $\text{GL}(n,\mathbb{R})$) as a product of three kinds of matrices: (1) elementary matrices; (2) permutation matrices; and (3) diagonal matrices with nonzero elements on the diagonal. Doing a row operation corresponds to multiplying on the left by a matrix of type (1); doing a row permutation corresponds to multiplying on the left by a matrix of type (2); and multiplying a row by a nonzero constant corresponds to multiplying on the left by a matrix of type (3).
For an element of $\text{GL}(n,\mathbb{Z})$ the same general method works, but there are some complications. First, the only diagonal matrices allowed are ones with $\pm 1$ on the diagonal. That makes it harder to get a $\pm 1$ into a pivot position. But one can accomplish that using the Euclidean algorithm as a guide to producing an appropriate sequence of row reductions. The final result is that $\text{GL}(n,\mathbb{Z})$ is generated by: (1) elementary matrices with a $\pm 1$ entry off the diagonal; (2) permutation matrices; and (3) diagonal matrices with $\pm 1$ on the diagonal.
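For concreteness, here is one way that Euclidean-algorithm-guided reduction could be sketched in Python (a rough illustration of the idea with plain nested lists; the function name `reduce_to_identity` and the op-recording format are my own, and the input is assumed to have determinant $\pm 1$):

```python
# Sketch: row-reduce a unimodular integer matrix to the identity using only
# the three generator types: (1) add an integer multiple of one row to
# another, (2) swap two rows, (3) negate a row.

def reduce_to_identity(A):
    """Destructively reduce A (nested lists, det = +-1) to the identity,
    returning the list of row operations used."""
    n = len(A)
    ops = []
    for col in range(n):
        # Euclidean algorithm on rows col..n-1: shrink the entries of this
        # column until exactly one nonzero entry (their gcd, up to sign)
        # remains.  For a det +-1 matrix that gcd must be 1.
        while True:
            rows = [i for i in range(col, n) if A[i][col] != 0]
            if len(rows) == 1:
                break
            p = min(rows, key=lambda i: abs(A[i][col]))  # smallest pivot
            for i in rows:
                if i != p:
                    q = A[i][col] // A[p][col]
                    A[i] = [a - q * b for a, b in zip(A[i], A[p])]
                    ops.append(("add", i, p, -q))
        p = rows[0]
        if p != col:                      # move the pivot into place
            A[p], A[col] = A[col], A[p]
            ops.append(("swap", p, col))
        if A[col][col] == -1:             # fix the sign of the pivot
            A[col] = [-a for a in A[col]]
            ops.append(("negate", col))
        assert A[col][col] == 1           # det +-1 forces a +-1 pivot here
        for i in range(n):                # clear the rest of the column
            if i != col and A[i][col] != 0:
                q = A[i][col]
                A[i] = [a - q * b for a, b in zip(A[i], A[col])]
                ops.append(("add", i, col, -q))
    return ops

A = [[14, -3], [9, -2]]                   # det = -1
ops = reduce_to_identity(A)
assert A == [[1, 0], [0, 1]]
```

Reversing the recorded operations (and inverting each one) then expresses the original matrix as a product of the three generator types.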
ADDED: Here's an example for the $2 \times 2$ matrix $\pmatrix{14 & -3 \\ 9 & -2}$.
I'll use row reduction notations like "$R_2 \to R_2 - R_1$" to represent "replace Row 2 by Row 2 minus Row 1". Here's a row reduction of the above matrix: $$\pmatrix{14 & -3 \\ 9 & -2} \xrightarrow{R_1 \to R_1 - R_2} \pmatrix{5 & -1 \\ 9 & -2} \xrightarrow{R_2 \to R_2 - R_1} \pmatrix{5 & -1 \\ 4 & -1} \xrightarrow{R_1 \to R_1 - R_2} \pmatrix{1 & 0 \\ 4 & -1} $$ $$\xrightarrow{R_2 \to R_2 - 4 R_1} \pmatrix{1 & 0 \\ 0 & -1} \xrightarrow{R_2 \to - R_2} \pmatrix{1 & 0 \\ 0 & 1} $$ If you wonder how I chose these row reductions, think of carrying out the Euclidean algorithm to compute the greatest common divisor of the entries in the first column.
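If you want to check the arithmetic, each step above is easy to replay in a few lines of Python (a throwaway script, not part of the argument; `row_add` is a made-up helper):

```python
# Replay the row reduction above step by step, checking each intermediate
# matrix.  row_add performs R_i -> R_i + c * R_j.

A = [[14, -3], [9, -2]]

def row_add(M, i, j, c):
    M[i] = [a + c * b for a, b in zip(M[i], M[j])]

row_add(A, 0, 1, -1); assert A == [[5, -1], [9, -2]]   # R1 -> R1 - R2
row_add(A, 1, 0, -1); assert A == [[5, -1], [4, -1]]   # R2 -> R2 - R1
row_add(A, 0, 1, -1); assert A == [[1, 0], [4, -1]]    # R1 -> R1 - R2
row_add(A, 1, 0, -4); assert A == [[1, 0], [0, -1]]    # R2 -> R2 - 4 R1
A[1] = [-a for a in A[1]]                              # R2 -> -R2
assert A == [[1, 0], [0, 1]]
```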
Now, the instruction $$A \xrightarrow{R_i \to R_i - R_j} B $$ is equivalent to the matrix equation $E_{ij}^{-1} A=B$, equivalently $A = E_{ij}B$, where $E_{ij}$ is the elementary matrix with a $1$ in the $ij$ entry, and therefore $E_{ij}^{-1}$ is the elementary matrix with a $-1$ in the $ij$ entry. Also, the instruction $$A \xrightarrow{R_i \to - R_i} B $$ is equivalent to the matrix equation $D_i A = B$, equivalently $A=D_iB$, where $D_i$ is the diagonal matrix with 1's on the diagonal except for a $-1$ in the $(i,i)$ entry. In general examples you may also wish to use row swapping matrices: the instruction to swap Row $i$ with Row $j$, written $$A \xrightarrow{R_i \leftrightarrow R_j} B $$ is equivalent to the matrix equation $S_{ij} A = B$, equivalently $A = S_{ij}B$, where $S_{ij}$ is the identity matrix with row $i$ and row $j$ swapped.
Thus we obtain a factorization $$\pmatrix{14 & -3 \\ 9 & -2} = E_{12} \, E_{21} \, E_{12} \, E_{21}^4 \, D_2 $$
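One can check this factorization by direct multiplication; here is a quick Python verification (throwaway code with a hand-rolled `matmul`, just to confirm the product):

```python
# Check that E12 * E21 * E12 * E21^4 * D2 equals the original matrix.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

E12 = [[1, 1], [0, 1]]    # elementary matrix: 1 in the (1,2) entry
E21 = [[1, 0], [1, 1]]    # elementary matrix: 1 in the (2,1) entry
E21_4 = [[1, 0], [4, 1]]  # E21^4: a 4 in the (2,1) entry
D2 = [[1, 0], [0, -1]]    # diagonal matrix negating row 2

product = E12
for F in (E21, E12, E21_4, D2):
    product = matmul(product, F)
assert product == [[14, -3], [9, -2]]
```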