4-dimensional numbers


I've thought about using split-complex and complex numbers together to build a 3-dimensional space (related to my previous question). I then found out that, using both together, we run into trouble with the product $ij$. So, by adding another dimension, I've defined $$k=\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}$$ with the property $k^2=1$. So numbers of the form $a+bi+cj+dk$, where $(a,b,c,d) \in \Bbb R^4$, $i$ is the imaginary unit, $j$ is the elementary unit of the split-complex numbers, and $k$ is the matrix defined above, could be represented in a 4-dimensional space. I know that these numbers look like the quaternions. They are not! So far, I have come up with the multiplication table below: $$\begin{array}{|l |l l l|}\hline & i&j&k \\ \hline i&-1&k&j \\ j& -k&1&i \\ k& -j&-i&1 \\ \hline \end{array}$$

We can note that, as with the quaternions, commutativity no longer holds for these numbers. When I showed this work to my math teacher, he said basically this:

  1. It's not coherent to use numbers with different properties as basis elements, since $i^2=-1$ whereas $j^2=k^2=1$.
  2. $2\times 2$ matrices don't represent anything in a 4-dimensional space.

Can somebody explain these two objections to me? What's incoherent here?

There are 5 answers below.

Answer (accepted, score 11)

Congratulations: the multiplication table for basis elements that you have laid out indicates that you have independently discovered the Clifford algebra of a two-dimensional vector space with metric signature $(1,-1)$, also denoted $C\ell_{1,1}(\Bbb R)$!

This algebra is isomorphic to the full ring of $2\times 2 $ real matrices $M_2(\Bbb R)$ as an algebra. So, it is completely coherent.

The quaternions, the split-complex numbers, and the structure you are describing are all united by the Clifford algebra perspective:

$$ \begin{array}{l|l} C\ell_{0,0}(\Bbb R) & \Bbb R\\ C\ell_{0,1}(\Bbb R) & \Bbb C\\ C\ell_{0,2}(\Bbb R) & \Bbb H\\ C\ell_{1,0}(\Bbb R) & \text{split complex numbers}\cong \Bbb R\times\Bbb R\\ C\ell_{1,1}(\Bbb R) & \text{your algebra}\cong M_2(\Bbb R)\end{array} $$

If you find all this incomprehensible at the moment, then I totally understand. I only started learning about Clifford algebras about a year ago. I don't even know if you have any abstract algebra training, either.

I just want to reassure you that what you described here is a perfectly sensible thing in ring theory. It looks like your teacher dismissed it, but that may be understandable: teachers often see a lot of ideas from students that do fall flat!

At any rate, the two objections you included in the OP are quite vague.


To find an explicit isomorphism with $M_2(\Bbb R)$, you can use this mapping: $$ 1\mapsto \begin{bmatrix}1&0\\0&1\end{bmatrix}\ \ i\mapsto \begin{bmatrix}0&-1\\1&0\end{bmatrix}\\ j\mapsto\begin{bmatrix}0&1\\1&0 \end{bmatrix}\ \ k=ij\mapsto \begin{bmatrix}-1&0\\0&1\end{bmatrix}\ \ $$

These four matrices clearly form a basis of $M_2(\Bbb R)$ and fit your table.
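As a quick sanity check (not part of the original answer), here is a small NumPy sketch verifying that these four matrices satisfy the defining Clifford relations and are linearly independent, hence a basis of $M_2(\Bbb R)$:

```python
import numpy as np

# The four matrices from the proposed isomorphism with M_2(R).
one = np.array([[1, 0], [0, 1]])
i = np.array([[0, -1], [1, 0]])
j = np.array([[0, 1], [1, 0]])
k = i @ j  # k = ij = [[-1, 0], [0, 1]]

# Defining relations of Cl_{1,1}(R): i^2 = -1, j^2 = k^2 = 1, ij = -ji.
assert np.array_equal(i @ i, -one)
assert np.array_equal(j @ j, one)
assert np.array_equal(k @ k, one)
assert np.array_equal(i @ j, -(j @ i))

# Linear independence: flatten each matrix into R^4; the 4x4 matrix of
# coordinates must have nonzero determinant, so {1, i, j, k} spans M_2(R).
coords = np.stack([m.reshape(4) for m in (one, i, j, k)])
assert round(np.linalg.det(coords)) != 0
```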

Answer (score 3)

You can build numbers generated by such $\mathbf{i},\mathbf{j},\mathbf{k}$, I see no incoherency. They will form a subalgebra of complex $2\times2$ matrices, in which the role of $\mathbf{i},\mathbf{j},\mathbf{k}$ will be played by $$ \mathbf{i}=\left(\begin{array}{cc} 0 & i \\ i & 0\end{array}\right),\qquad \mathbf{j}=\left(\begin{array}{cc} 0 & -i \\ i & 0\end{array}\right), \qquad \mathbf{k}=\left(\begin{array}{cc} 1 & 0 \\ 0 & -1\end{array}\right).$$ This is a four-dimensional subspace of the eight-dimensional (over $\mathbb{R}$) space of all $2\times2$ complex matrices.
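A hedged numerical sketch of this answer's claim (using NumPy, where `1j` is Python's imaginary unit): the complex matrices above have the required squares, and $\mathbf{i}$ and $\mathbf{j}$ anticommute with product $\pm\mathbf{k}$, so the real span of $\{1,\mathbf{i},\mathbf{j},\mathbf{k}\}$ is closed under multiplication.

```python
import numpy as np

# The proposed complex 2x2 representation.
one = np.eye(2)
i = np.array([[0, 1j], [1j, 0]])
j = np.array([[0, -1j], [1j, 0]])
k = np.array([[1, 0], [0, -1]])

# Required squares: i^2 = -1 while j^2 = k^2 = +1.
assert np.array_equal(i @ i, -one)
assert np.array_equal(j @ j, one)
assert np.array_equal(k @ k, one)

# i and j anticommute; in this representation their product is k up to sign.
assert np.array_equal(i @ j, -(j @ i))
assert np.array_equal(j @ i, k)
```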

Answer (score 3)

rschwieb has already given you the high-powered answer. Here, let me give you the low-powered version of what he wrote.

Consider the collection of $2\times 2$ matrices with real entries. We can write each matrix as $$ \begin{pmatrix} A & B \\ C & D \end{pmatrix} $$ and if we re-organize the presentation, it can be identified with an element of $\mathbb{R}^4$ $$ \begin{pmatrix} A \\ B \\ C \\ D\end{pmatrix} $$ By writing it as a matrix, you allow yourself to do "multiplication" by matrix multiplication.

Now, we can write $$ \begin{pmatrix} A & B \\ C & D \end{pmatrix} = A \begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix} + B \begin{pmatrix} 0 & 1 \\ 0 & 0\end{pmatrix} + C \begin{pmatrix} 0 & 0 \\ 1 & 0\end{pmatrix} + D \begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix} $$ which, if you know a bit of linear algebra, is just expressing a $2\times 2$ matrix in a basis.

As it turns out, what you've done is basically just choosing a different basis for the $2\times 2$ matrices. You chose

$$ \mathbf{1} = \begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix} \quad \mathbf{i} = \begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix} $$ and $$ \mathbf{j} = \begin{pmatrix} 0 & -1 \\ -1 & 0\end{pmatrix} \quad \mathbf{k} = \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix} $$

We can solve for the "standard" basis $\begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix}$ etc. in terms of this new basis. Plugging these back into the expression, we have

$$ \begin{pmatrix} A & B \\ C & D\end{pmatrix} = \frac{A}{2} (\mathbf{1} + \mathbf{k}) + \frac{B}{2} (-\mathbf{i} - \mathbf{j}) + \frac{C}{2} (\mathbf{i} - \mathbf{j}) + \frac{D}{2} (\mathbf{1} - \mathbf{k}) $$

This identification can be reversed (exercise for you!). In any case, your identification of $a\mathbf{1} + b\mathbf{i} + c\mathbf{j} + d\mathbf{k}$ with the $\mathbb{R}^4$ vector $(a,b,c,d)$ then corresponds to identifying the matrix $\begin{pmatrix} A & B \\ C & D\end{pmatrix}$ with the element $$\begin{pmatrix} \frac12 (A + D) \\ \frac12 (C - B) \\ -\frac12 (B+C) \\ \frac12 (A-D) \end{pmatrix}$$ which is the linear transformation of $\mathbb{R}^4$ given by the matrix multiplication $$ \begin{pmatrix} A \\ B \\ C \\ D\end{pmatrix} \mapsto \begin{pmatrix} \tfrac12 & 0 & 0 &\tfrac12 \\ 0 & -\tfrac12 & \tfrac12 & 0 \\ 0 & -\tfrac12 & -\tfrac12 & 0 \\ \tfrac12 & 0 & 0 & -\tfrac12 \end{pmatrix}\begin{pmatrix} A \\ B \\ C \\ D\end{pmatrix}$$
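A minimal NumPy check of this change of basis (my addition, not part of the answer): take the coefficients $a=\frac12(A+D)$, $b=\frac12(C-B)$, $c=-\frac12(B+C)$, $d=\frac12(A-D)$ read off from the column vector above, and confirm that they reconstruct the original matrix in this answer's basis (note this answer's $\mathbf{j}$ differs by a sign from the one in the accepted answer).

```python
import numpy as np

# Basis used in this answer.
one = np.array([[1, 0], [0, 1]])
i = np.array([[0, -1], [1, 0]])
j = np.array([[0, -1], [-1, 0]])
k = np.array([[1, 0], [0, -1]])

rng = np.random.default_rng(0)
A, B, C, D = rng.integers(-9, 9, size=4)

# Coefficients read off from the displayed column vector.
a = (A + D) / 2
b = (C - B) / 2
c = -(B + C) / 2
d = (A - D) / 2

# Reconstruct the matrix from the new basis and compare entrywise.
M = a * one + b * i + c * j + d * k
assert np.array_equal(M, np.array([[A, B], [C, D]]))
```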


What is the lesson behind all this? Given any four real numbers, you can of course identify them with an element of $\mathbb{R}^4$. The real question starts when you ask "how is this identification meaningful?" The first thing you can do is to try a little bit of linear algebra, as I outlined above. But things get really exciting when you start connecting the algebra to geometry, and that's where the power of the Clifford algebra that rschwieb mentioned really shines.

For the time being, if you cannot completely absorb the abstract nonsense in the definitions of Clifford algebras, it may be worthwhile to set your goal a tiny bit lower and think only about geometric algebra. (Unfortunately the Wikipedia link is not the best way to learn about this, read this first, and if you are interested, perhaps follow a textbook such as this.)

Answer (score 0)

You have discovered the split-quaternions. You can compare the multiplication table there with the one in your question.

This algebra is not commutative and has zero divisors, so it combines the "negative" traits of both the quaternions and the tessarines. On the other hand, it is notably isomorphic to the algebra of $2\times2$ real matrices. Due to this isomorphism, people usually speak about matrices rather than split-quaternions.
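To make both "negative" traits concrete, here is a short NumPy sketch (my addition): the units fail to commute, and since $j^2=1$, the product $(1+j)(1-j)=1-j^2=0$ exhibits a pair of nonzero zero divisors.

```python
import numpy as np

# Split-quaternion units as 2x2 real matrices (one common choice).
one = np.array([[1, 0], [0, 1]])
i = np.array([[0, 1], [-1, 0]])
j = np.array([[0, 1], [1, 0]])

# Noncommutativity: ij differs from ji.
assert not np.array_equal(i @ j, j @ i)

# Zero divisors: (1 + j)(1 - j) = 1 - j^2 = 0, yet neither factor is zero.
p, q = one + j, one - j
assert np.array_equal(p @ q, np.zeros((2, 2)))
assert p.any() and q.any()
```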

Answer (score 0)

Here is the Mathematica code for your proposed system, split-quaternions, using $2\times2$ real matrices $i=\left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \\ \end{array} \right),j=\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \\ \end{array} \right), k=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array} \right) $:

Unprotect[Dot];
Dot[x_?NumberQ, y_] := x y;
Protect[Dot];
Matrix /: Matrix[x_?MatrixQ] := 
  First[First[x]] /; x == First[First[x]] IdentityMatrix[Length[x]];
Matrix /: NonCommutativeMultiply[Matrix[x_?MatrixQ], y_] := 
  Dot[Matrix[x], y];
Matrix /: NonCommutativeMultiply[y_, Matrix[x_?MatrixQ]] := 
  Dot[y, Matrix[x]];
Matrix /: Dot[Matrix[x_], Matrix[y_]] := Matrix[x . y];
Matrix /: Matrix[x_] + Matrix[y_] := Matrix[x + y];
Matrix /: x_?NumericQ + Matrix[y_] := 
  Matrix[x IdentityMatrix[Length[y]] + y];
Matrix /: x_?NumericQ  Matrix[y_] := Matrix[x y];
Matrix /: Matrix[x_]*Matrix[y_] := Matrix[x . y] /; x . y == y . x;
Matrix /: Re[Matrix[x_?MatrixQ]] := Tr[x]/Length[x];
Matrix /: Conjugate[Matrix[x_?MatrixQ]] := Matrix[-x] + 2 Re[Matrix[x]]
Matrix /: Power[Matrix[x_ ?MatrixQ], y_] := 
  Matrix[MatrixPower[x, y]];

$Post = FullSimplify[# /. i -> Matrix[( {
                {0, 1},
                {-1, 0}
               } )] /. j -> Matrix[( {
               {0, 1},
               {1, 0}
              } )] /. k -> Matrix[( {
              {1, 0},
              {0, -1}
             } ) ] /. 
        f_[args1___, Matrix[mat_], args2___] :> 
         Matrix[MatrixFunction[f[args1, #, args2] &, mat]] /. 
       Matrix[{{a_, c_}, {d_, b_}}] :> (a + b)/
          2 + (c - d)/2 i + (c + d)/2 j + (a - b)/
           2 k ] &;

Test:

In=(i + k)^2

Out=0

In=Log[2 i + 3 j + 4]

Out=1/10 (2 Sqrt[5] (2 i + 3 j) ArcCoth[4/Sqrt[5]] + Log[161051])

In=(2 i + 3 j + 4) ** (5 - k)

Out=20 + 13 i + 17 j - 4 k
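The first and third test outputs can be cross-checked numerically; here is a hedged NumPy sketch (my addition) using the same matrices as the Mathematica code above:

```python
import numpy as np

# Matrices matching the Mathematica definitions above.
one = np.array([[1, 0], [0, 1]])
i = np.array([[0, 1], [-1, 0]])
j = np.array([[0, 1], [1, 0]])
k = np.array([[1, 0], [0, -1]])

# (i + k)^2 should vanish, as the first test shows.
assert np.array_equal((i + k) @ (i + k), np.zeros((2, 2)))

# (2i + 3j + 4) ** (5 - k) should equal 20 + 13i + 17j - 4k.
x = 2 * i + 3 * j + 4 * one
y = 5 * one - k
expected = 20 * one + 13 * i + 17 * j - 4 * k
assert np.array_equal(x @ y, expected)
```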