More on Matrices Representing Complex Numbers


This question is in a sense a follow-up and extension of this one, which essentially asks for representations of complex numbers $a + bi \in \Bbb C$ as $2 \times 2$ real matrices $A$ such that

$AJ = JA, \tag 1$

with

$J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}; \tag 2$

we observe that

$J^2 = - I. \tag 3$

The answer I gave there showed that all such matrices are given by

$A = aI + bJ \in M_{2 \times 2}(\Bbb R); \tag 4$

here we clearly have

$a, b \in \Bbb R, \tag 5$

and the correspondence 'twixt complex numbers and such matrices is given by

$\Bbb C \ni a + bi \longleftrightarrow aI + bJ \in M_{2 \times 2}(\Bbb R). \tag 6$

Seeking a generalization of these results, I here request that

1.) All real $2 \times 2$ matrices satisfying (3) be found;

2.) For any such $J$ as in request (1.), all real $2 \times 2$ matrices $A$ such that (1) holds be found;

3.) For any such $J$, $A$ as in request (2.),

$A = aI + bJ, \tag 7$

for some $a, b \in \Bbb R$ be shown;

4.) The mapping

$a + bi \longleftrightarrow aI + bJ \tag 8$

be shown to be an isomorphism 'twixt the complex numbers $\Bbb C$ and the set of $2 \times 2$ real matrices of the form $aI + bJ$.

There are 4 answers below.


Let us define $Z=\begin{bmatrix}0& -1\\ 1&0\end{bmatrix}$ for simplicity.

[1] Directly, any matrix of the following form for any real $a$ and nonzero real $k$: $$J=\begin{bmatrix}a& -(a^2+1)/k\\ k&-a\end{bmatrix}$$ For completeness, consider a matrix $$J=\begin{bmatrix}a&b\\c&d\end{bmatrix}$$ such that $J^2=-I$. We can solve this directly: $a^2+bc =-1;\ d^2+bc = -1;\ b(a+d)=0;\ c(a+d)=0$. If $a+d$ is nonzero, then $b=c=0$ by the last two equations, and $a^2=-1$ has no real solution. Otherwise we have $d=-a$, and can parametrize the solutions as done above (note that $bc = -(1+a^2) \ne 0$ forces $c = k \ne 0$).

More elegantly, for any invertible $X$, $$J=\pm X^{-1}Z X$$ satisfies (3). Conversely, for positive $k$ a direct computation with $$X=\begin{bmatrix}a/\sqrt{k}& 1/\sqrt{k}\\ \sqrt{k}&0\end{bmatrix}$$ shows that $-XZX^{-1}$ equals the matrix above, and negative $k$ is reached through the opposite sign (within the family, $-J(a,k) = J(-a,-k)$); so this form covers all possible $J$.
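The parametrized family can be sanity-checked in exact rational arithmetic; the following sketch (sample parameter values are arbitrary) verifies that each member squares to $-I$:

```python
# Sanity check (illustrative values): every J = [[a, -(a^2+1)/k], [k, -a]]
# with real a and nonzero k satisfies J^2 = -I.
from fractions import Fraction

def matmul(P, Q):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(P[i][t] * Q[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def make_J(a, k):
    a, k = Fraction(a), Fraction(k)
    return [[a, -(a * a + 1) / k], [k, -a]]

# spot-check several (a, k) pairs exactly
for a, k in [(0, 1), (3, 2), (-5, 7), (1, -4)]:
    J = make_J(a, k)
    assert matmul(J, J) == [[-1, 0], [0, -1]]
print("every sampled J satisfies J^2 = -I")
```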

[2] (Using conjugacy classes, this is essentially the same as the case with $J=Z$ itself.)

More precisely, if $A$ commutes with $J$, let $B=XAX^{-1}$ (i.e. $A=X^{-1}BX$); then $$AX^{-1}Z X = X^{-1} B Z X,$$ $$X^{-1}Z XA = X^{-1} Z B X,$$ and these two are equal precisely when $B$ commutes with $Z$. Therefore, relying on your previous results, the commuting matrices are exactly $$A=X^{-1}(aI+bZ)X$$

[3] Directly follows from the above.

[4] This should be simple to check, given the above properties. $I$ and $J$ are always linearly independent because $J$ has trace zero while $I$ has trace $2$.


Not exactly an answer: I have added an additional condition (continuity). Thanks to @ancientmathematician for catching my mistake.

Suppose we have a continuous injective ring homomorphism $\phi: \mathbb{C} \to \mathbb{R}^{2 \times 2}$.

Let $J= \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$.

Then I claim that $\phi(a+ib) = aI + b W J W^{-1}$ for some invertible $W$.

We must have $\phi(1) = I$, and so $\phi(q) = qI$ for $q \in \mathbb{Q}$.

Let $B=\phi(i)$. Similarly we have $\phi(qi) = qB$.

We need to determine the allowable values of $B$.

We have $\phi(i^2) = \phi(-1) = \phi(i)^2$, so $B^2+I = 0$; the eigenvalues of $B$ satisfy $\lambda^2 = -1$, and since $B$ is real they come in conjugate pairs, so $B$ has the distinct eigenvalues $\pm i$. Hence for some real $u,v \in \mathbb{R}^2$ we have $B(u+iv) = i(u+iv) = -v+iu$, and so $Bu = -v,\ Bv = u$. (It is straightforward to show that $u,v$ are linearly independent.) If we let $W = \begin{bmatrix} u & v\end{bmatrix}$ then $BW = WJ$, or $B = W J W^{-1}$.

Since $\mathbb{Q}[i]$ is dense in $\mathbb{C}$, it follows by continuity that $\phi(a+ib) = aI + b W J W^{-1}$ for any $a,b \in \mathbb{R}$.

It is straightforward to verify that $\phi(a+ib) = aI + b W J W^{-1}$ for any $a,b \in \mathbb{R}$ defines a continuous injective homomorphism.
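As a concrete illustration of that last verification (the invertible $W$ below is an arbitrary sample), one can check multiplicativity of $\phi$ in exact arithmetic:

```python
# Illustrative check: phi(a + bi) = aI + b W J W^{-1} respects complex
# multiplication.  The invertible W below is an arbitrary sample.
from fractions import Fraction as F

def matmul(P, Q):
    return [[sum(P[i][t] * Q[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(P):
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [[P[1][1] / det, -P[0][1] / det],
            [-P[1][0] / det, P[0][0] / det]]

Jstd = [[F(0), F(1)], [F(-1), F(0)]]
W = [[F(1), F(2)], [F(3), F(7)]]
B = matmul(W, matmul(Jstd, inv2(W)))   # B = W J W^{-1} plays the role of i

def phi(a, b):
    a, b = F(a), F(b)
    return [[a + b * B[0][0], b * B[0][1]],
            [b * B[1][0], a + b * B[1][1]]]

# (a1 + b1 i)(a2 + b2 i) = (a1 a2 - b1 b2) + (a1 b2 + a2 b1) i
a1, b1, a2, b2 = 2, 3, -1, 5
assert matmul(phi(a1, b1), phi(a2, b2)) == phi(a1*a2 - b1*b2, a1*b2 + a2*b1)
print("phi is multiplicative on a sample pair")
```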


I'll show $(3)$ directly. Let's think of $A$ and $J$ as complex matrices (that is, as matrices in $M_2(\mathbb{C})$). Since $J^2 = -I$, its eigenvalues are $\pm i$ and it is diagonalizable (as a complex matrix). Denote by $v_1,v_2 \in \mathbb{C}^2$ corresponding eigenvectors, so that $Jv_1 = iv_1$, $Jv_2 = -iv_2$. Since $AJ = JA$, $A$ preserves each eigenspace of $J$ (if $Jv = \lambda v$, then $J(Av) = A(Jv) = \lambda Av$); these eigenspaces are one-dimensional, so $v_1,v_2$ are also eigenvectors of $A$. Now, since $A$ is real, we have two possible cases:

  1. $A$ has distinct real eigenvalues. But then $A$ is diagonalizable over $\mathbb{R}$ and any real eigenvector of $A$ will also be an eigenvector of $J$ which is impossible.
  2. $A$ has eigenvalues $a \pm ib$ with $a,b \in \mathbb{R}$. Then $Av_1 = (a + ib)v_1$ and $Av_2 = (a - ib)v_2$. This immediately implies that $A = aI + bJ$ (apply both sides to $v_1,v_2$).
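A small numeric illustration of case 2, using the standard $J$ of the original question and arbitrary sample values of $a$, $b$: the characteristic polynomial of $aI + bJ$ is $t^2 - 2at + (a^2 + b^2)$, whose roots are $a \pm ib$.

```python
# Illustrative check with the standard J = [[0, 1], [-1, 0]] and sample a, b:
# A = aI + bJ has trace 2a and determinant a^2 + b^2, so its characteristic
# polynomial t^2 - 2a t + (a^2 + b^2) has roots a +/- ib.
a, b = 3, 4
A = [[a, b], [-b, a]]                      # aI + bJ
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert (trace, det) == (2 * a, a * a + b * b)
assert trace * trace - 4 * det == -(2 * b) ** 2   # negative discriminant
print("eigenvalues of aI + bJ are a +/- ib")
```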

Set

$J = \begin{bmatrix} j_{11} & j_{12} \\ j_{21} & j_{22} \end{bmatrix}; \tag 1$

then

$J^2 = \begin{bmatrix} j_{11} & j_{12} \\ j_{21} & j_{22} \end{bmatrix}\begin{bmatrix} j_{11} & j_{12} \\ j_{21} & j_{22} \end{bmatrix} = \begin{bmatrix} j_{11}^2 + j_{12}j_{21} & j_{12}(j_{11} + j_{22}) \\ j_{21} (j_{11} + j_{22}) &j_{22}^2 + j_{12}j_{21} \end{bmatrix}; \tag 2$

we also have

$J^2 = -I = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}, \tag 3$

and comparing (2) and (3) we find

$j_{11}^2 + j_{12}j_{21} = -1, \tag 4$

$j_{22}^2 + j_{12}j_{21} = -1, \tag 5$

$j_{12}(j_{11} + j_{22}) = 0, \tag 6$

$j_{21}(j_{11} + j_{22}) = 0; \tag 7$

now if

$j_{11} + j_{22} \ne 0, \tag 8$

then (6) and (7) yield

$j_{12} = j_{21} = 0, \tag 9$

and hence (4) and (5) become

$j_{11}^2 = j_{22}^2 = -1, \tag{10}$

which is impossible for real $j_{11}$, $j_{22}$; thus (8) is false and so we may choose $\alpha \in \Bbb R$ with

$j_{11} = \alpha = -j_{22}; \tag{11}$

now both (4) and (5) yield

$j_{12}j_{21} = -1 - \alpha^2 = -(1 + \alpha^2); \tag{12}$

since (12) shows $j_{12}j_{21} = -(1 + \alpha^2) \ne 0$, we must have $j_{12} \ne 0$; we set

$j_{12} = \beta \ne 0, \tag{13}$

and have

$j_{21} = -\dfrac{1 + \alpha^2}{\beta}, \tag{14}$

and we may write $J$ as

$J = \begin{bmatrix} \alpha & \beta \\ -\dfrac{1 + \alpha^2}{\beta} & -\alpha \end{bmatrix}, \tag{15}$

which gives every matrix $J$ satisfying (3). Note that

$\alpha, \beta \in \Bbb R, \; \beta \ne 0; \tag{16}$

with

$\alpha = 0, \; \beta = 1, \tag{17}$

we obtain

$J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \tag{18}$

as in this question. We have thus answered item (1.).

For item (2.), we set

$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \tag{19}$

and write

$AJ = JA \tag{20}$

as

$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} \alpha & \beta \\ -\dfrac{1 + \alpha^2}{\beta} & -\alpha \end{bmatrix} = \begin{bmatrix} \alpha & \beta \\ -\dfrac{1 + \alpha^2}{\beta} & -\alpha \end{bmatrix}\begin{bmatrix} a & b \\ c & d \end{bmatrix}; \tag{21}$

from this equation we obtain

$a\alpha -b\dfrac{1 + \alpha^2}{\beta} = \alpha a + \beta c, \tag{22}$

$a\beta - \alpha b = \alpha b + \beta d, \tag{23}$

$c\alpha - d\dfrac{1 + \alpha^2}{\beta} = -a\dfrac{1 + \alpha^2}{\beta} - \alpha c, \tag{24}$

$c\beta - \alpha d = -b\dfrac{1 + \alpha^2}{\beta} - \alpha d; \tag{25}$

from (22) and (25),

$-b\dfrac{1 + \alpha^2}{\beta} = \beta c, \tag{26}$

$c\beta = -b\dfrac{1 + \alpha^2}{\beta}; \tag{27}$

since these equations are essentially the same, we only retain one of them to give information about the structure of the matrix $A$; a minor rearrangement of either yields

$c = -\dfrac{1 + \alpha^2}{\beta^2}b, \tag{28}$

which allows us to think of $b$ as a free parameter in $A$, and let $c$ be fixed by this equation; we further observe that (23) may be written

$\beta(a - d) = 2\alpha b, \tag{29}$

or

$a - d = 2\dfrac{\alpha}{\beta} b, \tag{30}$

or

$d = a - 2\dfrac{\alpha}{\beta} b; \tag{31}$

(28) and (31) show that given $a$ and $b$, $c$ and $d$ are determined from the relation (20); note that taking $\alpha$, $\beta$ as in (17), $J$ takes the form (18), and

$c = -b, \; d = a, \tag{32}$

exactly as in my answer to Why matrices commuting with $\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$ represent complex numbers?; in the present more general case we see that

$A = \begin{bmatrix} a & b \\ -b \dfrac{1 + \alpha^2}{\beta^2} & a - 2\dfrac{\alpha}{\beta} b \end{bmatrix} \tag{33}$

presents every matrix $A$ such that $AJ = JA$ for the given $J$; thus is item (2) resolved.
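The claim can be spot-checked in exact arithmetic; the values of $\alpha$, $\beta$, $a$, $b$ below are arbitrary samples:

```python
# Spot check of (33): for J as in (15), the displayed A commutes with J.
# Sample parameter values are arbitrary rationals (beta nonzero).
from fractions import Fraction as F

def matmul(P, Q):
    return [[sum(P[i][t] * Q[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

alpha, beta = F(3), F(2)
a, b = F(5), F(-1)
J = [[alpha, beta], [-(1 + alpha**2) / beta, -alpha]]
A = [[a, b], [-b * (1 + alpha**2) / beta**2, a - 2 * alpha * b / beta]]

assert matmul(J, J) == [[-1, 0], [0, -1]]   # J satisfies (3)
assert matmul(A, J) == matmul(J, A)         # A commutes with J
print("A from (33) commutes with J from (15)")
```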

As for item (3) we write

$\begin{bmatrix} a & b \\ -b \dfrac{1 + \alpha^2}{\beta^2} & a - 2\dfrac{\alpha}{\beta} b \end{bmatrix} = \gamma I + \delta \begin{bmatrix} \alpha & \beta \\ -\dfrac{1 + \alpha^2}{\beta} & -\alpha \end{bmatrix}, \tag{34}$

then

$a = \gamma + \delta \alpha \tag{35}$

and

$a - 2\dfrac{\alpha}{\beta} b = \gamma - \delta \alpha; \tag{36}$

subtracting,

$2\dfrac{\alpha}{\beta} b = 2\delta \alpha, \tag{37}$

whence, provided $\alpha \ne 0$,

$\delta = \dfrac{b}{\beta}; \tag{38}$

when $\alpha = 0$, comparing the $(1, 2)$ entries of (34) gives $b = \delta \beta$, which yields the same value of $\delta$;

and thus

$\gamma = a - \delta \alpha = a - \dfrac{\alpha}{\beta}b; \tag{39}$

it is easily checked that such $\gamma$ and $\delta$ satisfy (34); thus is item (3) resolved.
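That check on (38) and (39) can likewise be carried out in exact arithmetic, with arbitrary sample values:

```python
# Check (38)-(39): gamma = a - (alpha/beta) b and delta = b/beta reproduce
# A of (33) as gamma I + delta J.  Sample values are arbitrary rationals.
from fractions import Fraction as F

alpha, beta, a, b = F(3), F(2), F(5), F(-1)
J = [[alpha, beta], [-(1 + alpha**2) / beta, -alpha]]
A = [[a, b], [-b * (1 + alpha**2) / beta**2, a - 2 * alpha * b / beta]]

delta = b / beta
gamma = a - alpha * b / beta
recon = [[gamma * (i == j) + delta * J[i][j] for j in range(2)]
         for i in range(2)]
assert recon == A
print("A = gamma I + delta J")
```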

At this point item (4) easily follows, since the mapping

$\gamma + i \delta \longleftrightarrow \gamma I + \delta J \tag{40}$

is an isomorphism from $\Bbb C$ to the set of matrices

$\gamma I + \delta J \in M_{2 \times 2}(\Bbb R). \tag{41}$
