Can octonions be represented by infinite matrices?


It is sometimes possible to multiply matrices of countably infinite dimension. (Matrix multiplication is defined in the usual way, with rows and columns multiplied termwise and summed.) However, it turns out that the associative property fails in general for infinite matrices, because the infinite series involved may converge only conditionally, so the order of summation matters.
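To make this concrete, here is a minimal sketch (my own example, not part of the question): take $A$ the all-ones infinite row, $C$ the all-ones infinite column, and $B$ the bidiagonal matrix with $1$ on the diagonal and $-1$ just below it. Every individual row-times-column sum is finite, yet $(AB)C\ne A(BC)$ because the double sum cannot be reordered:

```python
# A = (1, 1, 1, ...) as an infinite row, C = (1, 1, 1, ...)^T as an
# infinite column, and B[i][j] = 1 if i == j, -1 if i == j + 1 (1-indexed).
# Every row and column of B has at most two nonzero entries, so all the
# sums below are exact finite sums.

def B(i, j):
    """Entry (i, j) of the bidiagonal matrix B."""
    if i == j:
        return 1
    if i == j + 1:
        return -1
    return 0

def AB_entry(j):
    """Entry j of the row A*B; only i = j, j + 1 contribute."""
    return sum(B(i, j) for i in (j, j + 1))

def BC_entry(i):
    """Entry i of the column B*C; only j = i - 1, i contribute."""
    return sum(B(i, j) for j in {i - 1, i} if j >= 1)

# A*B is the zero row, so (A*B)*C = 0 ...
assert all(AB_entry(j) == 0 for j in range(1, 100))
AB_times_C = 0

# ... but B*C = (1, 0, 0, ...)^T, so A*(B*C) = 1.
A_times_BC = sum(BC_entry(i) for i in range(1, 100))  # the tail is all zeros

print(AB_times_C, A_times_BC)  # 0 1
```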

Meanwhile, the octonions $\mathbb{O}$ are a unital nonassociative $8$-dimensional algebra that cannot be represented by $n\times n$ matrices (otherwise they would associate). So it seems natural to ask: is it possible to represent $\mathbb{O}$ by infinite matrices?

I suppose one plan would be to take a finite-dimensional representation of the quaternions $\mathbb{H}$, "copy and paste" it into infinite matrices, and then find an infinite matrix for $\ell\in\mathbb{O}$ that squares to $-I$ and satisfies the rules of the Cayley-Dickson construction, but I don't see a way to do this.
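For reference, the nonassociativity that any such representation would have to capture can be sketched directly via the Cayley-Dickson doubling (helper names below are my own; the doubling rule $(a,b)(c,d)=(ac-\bar d b,\ da+b\bar c)$, $\overline{(a,b)}=(\bar a,-b)$ is one standard convention):

```python
# A minimal recursive Cayley-Dickson sketch: an octonion is stored as
# nested pairs (pairs of pairs of pairs) of Python floats.

def neg(x):
    return -x if isinstance(x, float) else (neg(x[0]), neg(x[1]))

def conj(x):
    return x if isinstance(x, float) else (conj(x[0]), neg(x[1]))

def add(x, y):
    return x + y if isinstance(x, float) else (add(x[0], y[0]), add(x[1], y[1]))

def mul(x, y):
    """Cayley-Dickson doubling: (a,b)(c,d) = (ac - conj(d)b, da + b conj(c))."""
    if isinstance(x, float):
        return x * y
    (a, b), (c, d) = x, y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def from_vec(v):
    """Pack a list of 2^k reals into nested pairs."""
    h = len(v) // 2
    return v[0] if len(v) == 1 else (from_vec(v[:h]), from_vec(v[h:]))

def to_vec(x):
    return [x] if isinstance(x, float) else to_vec(x[0]) + to_vec(x[1])

e = [from_vec([1.0 if i == k else 0.0 for i in range(8)]) for k in range(8)]

# each imaginary unit squares to -e_0 ...
assert to_vec(mul(e[1], e[1]))[0] == -1.0
# ... but the algebra is not associative: the associator (e1, e2, e4) != 0
lhs = to_vec(mul(mul(e[1], e[2]), e[4]))   # comes out as +e_7 here
rhs = to_vec(mul(e[1], mul(e[2], e[4])))   # comes out as -e_7 here
assert lhs != rhs
```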

(I suppose one could also generalize this question to arbitrary nonassociative algebras.)

Best Answer

OK, since nobody else wants to post it, I'll do it as promised.

We want to design several matrices $E_j$ ($j=1,\dots,n$; in the case of the octonions $n=7$ or $8$, depending on whether or not you insist on representing $e_0$ by the identity matrix) with a multiplication table of the form $E_iE_j=\varepsilon_{ij}E_{k(i,j)}$, where $\varepsilon_{ij}$ is some real number ($\pm 1$ in the octonion multiplication table) and $k(i,j)$ is some index depending on $i,j$.

We start by choosing pairwise distinct real numbers $r_{j,k}, c_{j,k}\in(0,1/2)$, $j=1,\dots,n$, $k=1,2,\dots$, and consider the matrices $A_j=(r_{j,k}^\ell c_{j,\ell}^k)_{k,\ell}$ whose $k$-th row and $\ell$-th column are geometric progressions with ratios $r_{j,k}$ and $c_{j,\ell}$ respectively. Of course, these don't yet give us what we want, but we shall make only finitely many corrections in each row and column to satisfy the equations. Note that all row-times-column products will then even converge absolutely, though, of course, we still won't be able to interchange the order of summation in the triple product.
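As a numerical sanity check on the absolute convergence claim (the ratios below are arbitrary choices in $(0,1/2)$, and the truncation length is mine):

```python
import random

random.seed(0)

n, K = 3, 120  # three matrices, indices truncated at 120 for the check
# pairwise distinct ratios in (0, 1/2), as in the construction
r = [[random.uniform(0.01, 0.49) for _ in range(K + 1)] for _ in range(n)]
c = [[random.uniform(0.01, 0.49) for _ in range(K + 1)] for _ in range(n)]

def entry(j, k, l):
    """(A_j)_{k, l} = r_{j,k}^l * c_{j,l}^k."""
    return r[j][k] ** l * c[j][l] ** k

def row_dot_col(i, k, j, q, terms):
    """Partial sum of (k-th row of A_i) . (q-th column of A_j)."""
    return sum(entry(i, k, l) * entry(j, l, q) for l in range(1, terms + 1))

# each term is bounded by (1/2)^l * (1/2)^l = 4^(-l), so the series
# converges absolutely and the tail beyond l = 50 is negligible:
s50, s100 = row_dot_col(0, 1, 1, 2, 50), row_dot_col(0, 1, 1, 2, 100)
print(abs(s100 - s50) < 1e-25)  # True
```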

We suppose that for some $N$ (initially $1$) the first $N-1$ rows/columns of each matrix have already been chosen so that the desired multiplication table equations are satisfied for the $(N-1)\times(N-1)$ blocks, i.e., for the rows $R_{i,p}$ (that notation stands for the $p$-th row/column of the $i$-th matrix) and columns $C_{j,q}$, we have $R_{i,p}\cdot C_{j,q}=\varepsilon_{i,j}(E_{k(i,j)})_{p,q}$ for all $p,q\le N-1$. We now need to modify the $N$-th row $R_i$ and column $C_i$ (I'll suppress the index $N$) of each matrix so that they satisfy the system $$ R_i\cdot C_{j,p}=\alpha_{i,j,p},\quad i,j=1,\dots,n,\ p\le N-1\,; \\ R_{i,p}\cdot C_{j}=\beta_{i,j,p},\quad i,j=1,\dots,n,\ p\le N-1\,; \\ R_i\cdot C_j=\gamma_{i,j},\quad i,j=1,\dots,n\,, $$ where $\alpha_{i,j,p},\beta_{i,j,p},\gamma_{i,j}$ are some prescribed real numbers. We shall make all modifications only beyond the $N$-th position, so the $N\times N$ block of each matrix is treated as known here.

To do this, choose disjoint finite subsets $E$ and $E_{i,j}$ of the integers, of cardinalities $|E|=n(N-1)$ and $|E_{i,j}|>2n(N-1)$, that lie so far out that the initial geometric progressions in the rows $R_{i,p}$ and columns $C_{i,p}$, $i=1,\dots,n$, $p=1,\dots,N-1$, have not been disturbed there during the previous steps. Now set all elements of $R_i$ and $C_i$ at the positions from $E\cup \bigcup_{i,j}E_{i,j}$ to $0$ and look at the equations. Most likely, all of them will be wrong. However, we can now correct the first set (the one with $\alpha_{i,j,p}$) by modifying each $R_i$ on $E$ appropriately (the corresponding linear systems have Vandermonde matrices, so they are all nondegenerate). Similarly, we can correct the second set (the one with $\beta_{i,j,p}$) by modifying each $C_j$ at the positions from $E$.
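A small numerical sketch of this Vandermonde step (positions, ratios, and target values below are all made up): zero out a geometric-progression row on a far-away set $E$, then solve for the corrections that make its products with the given columns hit prescribed values:

```python
import numpy as np

N_len = 40                    # length-40 truncations for the sketch
E = [10, 12, 14]              # far-away positions where R may be edited
ratios = [0.15, 0.30, 0.45]   # distinct column ratios c_p in (0, 1/2)

cols = [np.array([cp ** m for m in range(1, N_len + 1)]) for cp in ratios]
R = np.array([0.41 ** m for m in range(1, N_len + 1)])  # row to be corrected
targets = np.array([1.0, -1.0, 0.5])  # prescribed values of R . C_p

# zero R on E, then solve for the corrections x_t to be placed on E;
# the matrix V[p][t] = c_p^{E_t} is a (generalized) Vandermonde matrix
R[[m - 1 for m in E]] = 0.0
V = np.array([[cp ** m for m in E] for cp in ratios])
x = np.linalg.solve(V, targets - np.array([R @ c for c in cols]))
R[[m - 1 for m in E]] = x

print(np.allclose([R @ c for c in cols], targets))  # True
```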

Now we need to correct the last set of equations without spoiling the first two. To this end, we change the entries of $R_i$ and $C_j$ on each set $E_{i,j}$. For each such set we find a non-zero vector $v_{i,j}$ orthogonal to all the vectors cut out by the positions from $E_{i,j}$ in the first $N-1$ rows and columns of all the matrices (which is possible because $|E_{i,j}|>2n(N-1)$) and place this vector, with appropriate coefficients, at the positions from $E_{i,j}$ in $R_i$ and $C_j$. This corrects the equation for $R_i\cdot C_j$ without affecting any other equation. After doing this for all $i,j$, we end up with all the equations satisfied, i.e., with matrices whose $N\times N$ blocks are good.
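The orthogonal-vector trick can likewise be sketched numerically (dimensions and values below are illustrative, with $8$ playing the role of $2n(N-1)<|E_{i,j}|=9$):

```python
import numpy as np

rng = np.random.default_rng(2)

# 8 "old" rows/columns restricted to the 9 positions of one set E_{i,j}
known = rng.normal(size=(8, 9))

# a nonzero vector orthogonal to all 8 known restrictions: since the
# rank of `known` is at most 8 < 9, the last right-singular vector works
v = np.linalg.svd(known)[2][-1]

# place t*v into R_i and v into C_j on these positions: this adds
# t * |v|^2 to R_i . C_j and nothing to any product with old rows/columns
gamma_gap = 0.7               # how far R_i . C_j currently is from gamma_{i,j}
t = gamma_gap / (v @ v)
R_patch, C_patch = t * v, v

print(np.isclose(R_patch @ C_patch, gamma_gap),
      np.allclose(known @ R_patch, 0),
      np.allclose(known @ C_patch, 0))  # True True True
```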