General Linear Group: Thinking of invertible matrices as bijective functions?


In my lecture notes I encountered the following result,

For any set $S$ the set $F$ of bijective functions $f:S\to S$ is a group under composition, but is not in general abelian.

Then it was mentioned that:

The set of invertible $n\times n$ matrices over field $\mathbb F$ is a group under matrix multiplication: it is a special case of the above result with $S=\mathbb F^n$ and is non-abelian if $n>1$. It is called the general linear group, $\text{GL}(n,\mathbb F)$.

I am having trouble seeing the connection between the two. How can you think of a matrix as a bijective function when it doesn't have an input? Am I missing something?

Thanks so much in advance :)

4 Answers

Accepted Answer

One important viewpoint is that the whole reason matrices exist is to represent linear transformations. When you think of an $m \times n$ matrix $A$ with entries in a field $F$, you should usually think about the mapping $L_A:F^n \to F^m$ defined by $L_A(x) = Ax$.
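To make the viewpoint concrete, here is a minimal numpy sketch (the matrix $A$ and the vector $x$ are made-up examples): the matrix's "input" is a vector, and applying the matrix is just evaluating the associated linear map $L_A$.

```python
import numpy as np

# A 2x3 matrix A over the reals represents the map L_A: R^3 -> R^2, x |-> Ax.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

def L_A(x):
    # Applying the matrix to x is evaluating the linear map at x.
    return A @ x

x = np.array([1.0, 1.0, 1.0])  # an input vector in R^3
y = L_A(x)                     # the output, a vector in R^2
```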

Answer

In the case of $n\times n$-matrices $A$ over ${\Bbb F}$, one considers the mappings $f_A:{\Bbb F}^n\rightarrow{\Bbb F}^n:x\mapsto Ax$ (matrix-vector multiplication). The mapping $f_A$ is bijective iff the matrix $A$ is invertible.
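A small numpy illustration of this equivalence (the particular matrices are arbitrary examples): when $A$ is invertible, $f_{A^{-1}}$ inverts $f_A$, so $f_A$ is a bijection; a singular matrix sends some nonzero vector to $0$, so its map is not injective.

```python
import numpy as np

# Invertible A: f_A is bijective, with inverse map f_{A^{-1}}.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])          # det = 1, so A is invertible
A_inv = np.linalg.inv(A)

x = np.array([3.0, -2.0])
# Round-tripping through f_A and f_{A^{-1}} recovers x, witnessing bijectivity.
x_back = A_inv @ (A @ x)

# Singular S: f_S is not injective, since a nonzero vector maps to 0,
# the same image as the zero vector.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # det = 0
kernel_vec = np.array([2.0, -1.0])  # lies in the kernel of S
```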

Answer

Given any two $\mathbf F$-vector spaces $U$ and $V$, of dimensions $m$ and $n$ respectively, a linear map $f:U\longrightarrow V$ is entirely determined by the images of the vectors $e_i\;(1\le i\le m)$ in a basis $\mathcal B$ of $U$.

These images have $n$ coordinates in a basis $\mathcal B'$ of $V$. Thus one can associate to the linear map $f:(U,\mathcal B)\longrightarrow(V,\mathcal B')$ the $n{\times}m\;$ matrix $A=(a_{i,j})_{\substack{1\le i\le n\\[0.5ex]1\le j\le m}}$, where the $j$-th column is made up of the $n$ coordinates of $f(e_j)$.

In particular, if $U$ and $V$ have the same dimension $n$, we obtain a square matrix (which depends on the chosen basis). It can be shown that $f$ is an isomorphism if and only if this matrix is invertible, i.e. if and only if its determinant is non-zero.

Answer

The foundation of linear algebra is the matrix representation theorem:

If $U,V$ are finite dimensional vector spaces with bases $(u_1,\ldots,u_n)$ and $(v_1,\ldots, v_m)$, then any linear function $f\colon U\to V$ satisfies:

$$\begin{aligned} f(u) &= f\Big(\sum_{j=1}^n \alpha_j u_j\Big) &\text{for some $\alpha_j$, since $(u_j)$ is a basis} \\ &= \sum_{j=1}^n \alpha_j f(u_j) &\text{since $f$ is linear} \\ &= \sum_{j=1}^n \alpha_j \sum_{i=1}^m \beta_{ij} v_i &\text{for some $\beta_{ij}$, since $(v_i)$ is a basis} \\ &= \sum_{i=1}^{m} \Big(\sum_{j=1}^{n}\beta_{ij}\alpha_j\Big)v_i &\text{rearranging} \end{aligned}$$

So for any vector $u\in U$ with coordinates $a = (\alpha_1,\ldots,\alpha_n)$ with respect to the basis $(u_j)$, the coordinates of $f(u)\in V$ with respect to the basis $(v_i)$ are $$c = \Big(\sum_{j=1}^{n}\beta_{1j}\alpha_j, \sum_{j=1}^{n}\beta_{2j}\alpha_j, \ldots, \sum_{j=1}^{n}\beta_{mj}\alpha_j\Big) = B\cdot a$$ where $B = (\beta_{ij})$.

So, when we fix our coordinate system to $(u_j)$ in $U$ and $(v_i)$ in $V$, we can equivalently represent the function $f$ as $$a\mapsto B\cdot a$$

The matrix $B$ contains at entry $(i,j)$ the $i$-th coordinate of $f(u_j)$, i.e. it encodes how the basis vectors of $U$ are mapped into $V$ under $f$.
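This recipe can be carried out directly in numpy. Here is a sketch with a made-up linear map $f:\mathbb R^2\to\mathbb R^2$: each column of $B$ is $f$ applied to a standard basis vector, and the resulting matrix reproduces $f$ in coordinates.

```python
import numpy as np

# A made-up linear map f: R^2 -> R^2, f(x, y) = (x + y, 2y).
def f(v):
    return np.array([v[0] + v[1], 2.0 * v[1]])

# Column j of B holds the coordinates of f(e_j), where e_j is the
# j-th standard basis vector.
e = np.eye(2)
B = np.column_stack([f(e[:, j]) for j in range(2)])

# The matrix B now represents f: applying B agrees with evaluating f.
v = np.array([3.0, 5.0])
agrees = np.allclose(B @ v, f(v))
```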

This is the main reason matrices were introduced in the first place: as a useful notational convention for denoting linear maps between finite-dimensional vector spaces. So a matrix $A$ "is" the linear map $x\mapsto Ax$.
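Under this identification, composing the maps $x\mapsto Bx$ and then $x\mapsto Ax$ gives $x\mapsto (AB)x$, which is why the group of bijective linear maps under composition is the same group as the invertible matrices under multiplication. A numpy sketch with two arbitrary invertible $2\times 2$ matrices:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # swaps the two coordinates
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # a shear

x = np.array([2.0, 3.0])
composed    = A @ (B @ x)    # apply f_B first, then f_A
via_product = (A @ B) @ x    # apply the single map f_{AB}

# GL(2, R) is non-abelian: composition order matters.
noncommutative = not np.allclose(A @ B, B @ A)
```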