Matrix square roots of -I

Since we can see matrices as generalizations of complex numbers, I asked myself whether there is a way to classify those matrices which play the role of the imaginary unit. That is, I would like to identify the set of $n\times n$ real-valued matrices $M$ whose square $M^2$ equals $-I$, where $I$ is the $n\times n$ identity matrix. Has this set already been classified?

The matrix $J = \left( \begin{smallmatrix}0 & -1\\1 & 0 \end{smallmatrix} \right)$ satisfies $J^{2} = -I$ in the 2-dimensional case.
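A quick numerical sanity check of this example (a minimal Python sketch; the helper `matmul` is my own illustrative name, not from any library):

```python
# Check that J^2 = -I for the 2x2 example J = [[0, -1], [1, 0]].

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

J = [[0, -1],
     [1, 0]]

print(matmul(J, J))  # [[-1, 0], [0, -1]], i.e. -I
```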

EDIT:

Thanks to your comments and answers, I have the following observation (if it's wrong, please tell me):

Suppose that $A$ satisfies $A^t=-A$. Then for two vectors $x,y\in\mathbb{R}^{2n}$ we can observe that

$y^tAx=(Ax)^{t}y=x^{t}A^ty=-x^tAy$

My interpretation is that if the angle between $x$ and $Ay$ is $\alpha$, then the angle between $Ax$ and $y$ is $\pi+\alpha$, since ${\displaystyle \cos \alpha=-\cos(\alpha+\pi )}$ (here we suppose that the lengths of the vectors are not relevant). If now $y=x$, then $x^tAx=-x^tAx$, so $x^tAx=0$; that is, $Ax$ is orthogonal to $x$.

Is it then true that $A^2$ is a rotation by $180$ degrees? Because of the vector lengths we would have something like $A^2=-cI$, where $c>0$ is a constant. Is that true?
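A numerical sketch of this question (helper name `matmul` is mine): in dimension $2$ a skew-symmetric matrix $A=\left(\begin{smallmatrix}0&-b\\b&0\end{smallmatrix}\right)$ does satisfy $A^2=-b^2I$, but the analogous statement already fails for a $4\times 4$ skew-symmetric matrix whose blocks are scaled differently.

```python
# 2x2 skew-symmetric case: A^2 is a negative scalar times I.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

b = 3
A2x2 = [[0, -b],
        [b, 0]]
print(matmul(A2x2, A2x2))  # [[-9, 0], [0, -9]] = -b^2 * I

# 4x4 skew-symmetric with mixed scalings: the square is NOT scalar.
A4x4 = [[0, -1, 0,  0],
        [1,  0, 0,  0],
        [0,  0, 0, -2],
        [0,  0, 2,  0]]
print(matmul(A4x4, A4x4))  # diag(-1, -1, -4, -4), not a multiple of I
```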

There are 3 answers below.

Answer 1:

Here's a solution for the $n=2$ case (too long for the comments, sorry):

Using the determinant condition I gave in the comments, and letting $$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$$ we have $$\det(tI-A) = t^2-(a_{11}+a_{22})t+a_{11}a_{22}-a_{12}a_{21} = t^2+1.$$ Equating coefficients gives $a_{22}=-a_{11}$ and $a_{11}a_{22}-a_{12}a_{21}=1$, i.e. $a_{12}a_{21}=-a_{11}^2-1<0$, so in particular $a_{21}\neq 0$. Rearranging a bit gives the matrix $$ A = \begin{pmatrix} a_{11} & \frac{-a_{11}^2-1}{a_{21}} \\ a_{21} & -a_{11} \end{pmatrix} $$ for $a_{11} \in \mathbb{R}$ and $a_{21} \in \mathbb{R}$, $a_{21} \neq 0$. So in the case of the example you gave, $a_{11}=0$ and $a_{21}=1$.
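This two-parameter family can be checked exactly with rational arithmetic (a sketch; the helper names `matmul` and `make_A` are mine):

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def make_A(a11, a21):
    """The general 2x2 family derived above; requires a21 != 0."""
    a11, a21 = Fraction(a11), Fraction(a21)
    return [[a11, (-a11 ** 2 - 1) / a21],
            [a21, -a11]]

minus_I = [[-1, 0], [0, -1]]
for a11, a21 in [(0, 1), (3, 2), (-5, 7)]:
    assert matmul(make_A(a11, a21), make_A(a11, a21)) == minus_I
print("A^2 = -I for every sampled (a11, a21)")
```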

Answer 2:

You might be interested in complex structures on real vector spaces.

The basic idea is this: suppose I have a complex vector space $V$ of dimension $n$. If I forget how to multiply by $i$, and only use scalars that are real, then $V$ is also a real vector space of dimension $2n$ (let's call this $V_{\mathbb{R}}$, to make it clear that I've forgotten everything complex). An interesting question is: what exactly did I forget when I forgot how to multiply by $i$? Can I remember it for later, and get back my original complex vector space?

On the complex vector space $V$, there is a linear map $J: V \to V$ which is multiplication by $i$, defined by $Jv = iv$. In fact, this map is $\mathbb{R}$-linear, and so gives a linear map $J: V_\mathbb{R} \to V_\mathbb{R}$. We can check that $J^2v = i^2v = -v$, and so $J^2 = -\mathrm{id}_V$. This map $J$ is exactly what we forgot when we forgot how to multiply by $i$: if I have a vector $v \in V_{\mathbb{R}}$ and want to scale it by $(a + bi)$, then I use $(a \,\mathrm{id}_V + bJ)v$.

Your question is essentially going the opposite way: starting with a $2n$-dimensional real vector space $W$, what are the maps $J: W \to W$ such that $(W, J)$ becomes a complex vector space with action $(a + bi)w = (a \, \mathrm{id}_W + bJ)w$?
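For $n=1$ this recovery of complex multiplication can be seen concretely: under the identification $(x,y)\leftrightarrow x+iy$ of $\mathbb{R}^2$ with $\mathbb{C}$, applying $a\,\mathrm{id}+bJ$ is the same as multiplying by $a+bi$. A sketch (the helper `act` is my own name):

```python
# On R^2 with J = [[0, -1], [1, 0]], the action (a + bi)w := (a*I + b*J)w
# reproduces complex multiplication under (x, y) <-> x + iy.

def act(a, b, w):
    """Apply a*I + b*J to the vector w = (x, y)."""
    x, y = w
    return (a * x - b * y, b * x + a * y)

z = complex(3, 4) * complex(2, -1)  # complex multiplication: (3+4i)(2-i)
v = act(3, 4, (2, -1))              # the same product via a*I + b*J
print(z, v)  # (10+5j) (10, 5)
```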

Answer 3:

If you want to make progress, you will need to read some books...

The matrices $M\in M_2(\mathbb{R})$ s.t. $M^2=-I_2$ are similar to your $J$ (there are infinitely many such matrices).

Yet, if you consider that $\mathbb{R}^2$ is Euclidean (equipped with an inner product), then it is natural to consider the solutions $M$ that are normal ($MM^T=M^TM$). Then you find only $2$ solutions: $\pm J=Rot(\pm \pi/2)$, which are skew-symmetric. If we consider the isomorphism $f:a+ib\in \mathbb{C}\mapsto \begin{pmatrix}a&-b\\b&a\end{pmatrix}=aI+bJ\in M_2(\mathbb{R})$, then $\pm J=\pm f(i)$ (cf. Joppy's post). Of course, if $M$ is skew-symmetric (that is, $a=0$, so $M=bJ$), then $M^2=-b^2I$; yet this is false when $n>2$.
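The map $f$ really is a ring homomorphism, which is easy to spot-check (a sketch; `matmul` and `f` are my own helper names):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def f(z):
    """f(a + ib) = a*I + b*J, the standard embedding of C into M_2(R)."""
    a, b = z.real, z.imag
    return [[a, -b],
            [b, a]]

z1, z2 = complex(1, 2), complex(3, -1)
assert matmul(f(z1), f(z2)) == f(z1 * z2)   # f is multiplicative
assert f(complex(0, 1)) == [[0, -1], [1, 0]]  # f(i) = J
print("f(z1) f(z2) = f(z1 z2) checked")
```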

Now, in dimension $2n$, the matrices s.t. $M^2=-I_{2n}$ are similar to $diag(J_1,\cdots,J_n)$, where $J_k=J$. (Note that $M^2=-I$ has no real solutions in odd dimension, since $\det(M)^2=\det(-I)=-1$ would be required.)
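The canonical block-diagonal solution is easy to build and verify (a sketch; `matmul` and `block_diag_J` are my own names):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def block_diag_J(n):
    """Build the 2n x 2n canonical solution diag(J, ..., J)."""
    size = 2 * n
    M = [[0] * size for _ in range(size)]
    for k in range(n):
        M[2 * k][2 * k + 1] = -1
        M[2 * k + 1][2 * k] = 1
    return M

n = 3
M = block_diag_J(n)
minus_I = [[-1 if i == j else 0 for j in range(2 * n)] for i in range(2 * n)]
assert matmul(M, M) == minus_I  # M^2 = -I in dimension 2n = 6
print("diag(J, J, J)^2 = -I_6 checked")
```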

In the Euclidean case, where $M$ is normal, the solutions are skew-symmetric and of the form $Pdiag(J_1,\cdots,J_n)P^T$, where $P$ is an orthogonal matrix; that is to say, $\mathbb{R}^{2n}$ is an orthogonal sum of $n$ planes $\Pi_k$ s.t., over each $\Pi_k$, the linear transformation is $Rot(\pm \pi/2)$.
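A sketch of this in dimension $4$, conjugating $diag(J,J)$ by an explicit orthogonal (Givens) rotation with exact rational entries (the helper names are mine):

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

# D = diag(J, J), the canonical 4x4 solution.
D = [[0, -1, 0,  0],
     [1,  0, 0,  0],
     [0,  0, 0, -1],
     [0,  0, 1,  0]]

# P: a Givens rotation in the (0, 2)-plane; orthogonal, exact entries
# since (3/5)^2 + (4/5)^2 = 1.
c, s = Fraction(3, 5), Fraction(4, 5)
P = [[c, 0, -s, 0],
     [0, 1,  0, 0],
     [s, 0,  c, 0],
     [0, 0,  0, 1]]

M = matmul(matmul(P, D), transpose(P))
minus_I = [[-1 if i == j else 0 for j in range(4)] for i in range(4)]
assert matmul(M, M) == minus_I                            # M^2 = -I
assert transpose(M) == [[-x for x in row] for row in M]   # M skew-symmetric
print("P diag(J, J) P^T is a skew-symmetric square root of -I_4")
```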