The determinant of $T: \mathbb C^2\to \mathbb C^2$ as an $\mathbb R$-linear operator


Suppose the determinant of a $\mathbb C$-linear transformation $T:\mathbb C^2\to \mathbb C^2$ is $a+bi$. I'm trying to prove that when $\mathbb C^2$ is identified with $\mathbb R^4$, the determinant of the $\mathbb R$-linear transformation $T:\mathbb R^4\to \mathbb R^4$ is $a^2+b^2$.

I started with the lower-dimensional case: a complex-linear operator $T': \mathbb C\to \mathbb C$ must be multiplication by some complex number $a+bi$, and if I write the matrix of $T'$ w.r.t. the basis $(1,i)$ of $\mathbb C$ over $\mathbb R$, the determinant of that matrix is $a^2+b^2$.

In the higher-dimensional case, the complex-linear $T$ is multiplication by a $2\times 2$ matrix $$A=\begin{bmatrix}x_1+ix_2&z_1+iz_2\\y_1+iy_2&w_1+iw_2\end{bmatrix}.$$ The set $((1,0)^T,(i,0)^T,(0,1)^T,(0,i)^T)$ is an $\mathbb R$-basis of $\mathbb C^2$. W.r.t. this basis, the matrix of $T$ is $$A'=\begin{bmatrix}x_1&-x_2&z_1&-z_2\\x_2&x_1&z_2&z_1\\y_1&-y_2&w_1&-w_2\\y_2&y_1&w_2&w_1\end{bmatrix}.$$

Am I supposed to compute the determinant of this last matrix and check that it equals $a^2+b^2$ whenever $\det A=a+bi$? That seems like a lot of calculation.
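Before committing to the hand computation, the claim can at least be checked numerically. Below is a quick sketch (using numpy; `realify` is an ad-hoc name for the identification via the basis above, not standard notation) that builds $A'$ from a random complex $A$ and compares determinants:

```python
import numpy as np

# Random complex 2x2 matrix A; realify(A) is its matrix as an R-linear map,
# w.r.t. the R-basis (1,0), (i,0), (0,1), (0,i) of C^2 used in the question.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

def realify(A):
    """Replace each complex entry a+bi by the 2x2 real block [[a, -b], [b, a]]."""
    n = A.shape[0]
    Ar = np.empty((2 * n, 2 * n))
    for j in range(n):
        for k in range(n):
            a, b = A[j, k].real, A[j, k].imag
            Ar[2 * j:2 * j + 2, 2 * k:2 * k + 2] = [[a, -b], [b, a]]
    return Ar

det_C = np.linalg.det(A)           # a + bi
det_R = np.linalg.det(realify(A))  # should equal a^2 + b^2 = |det_C|^2
assert np.isclose(det_R, abs(det_C) ** 2)
```

This is only a sanity check for one random matrix, of course, not a proof.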

There are 5 solutions below.

---

Let us do this in full generality. Consider an $n\times n$ matrix $Z=A+iB$ with entries in $\Bbb C$, where $A,B$ are its real and imaginary parts, with entries in $\Bbb R$. Then $Z$ induces a linear map $\Bbb C^n\to\Bbb C^n$, and thus, applying the forgetful functor "$!$", a map $$ !Z:(\Bbb R^2)^n\to (\Bbb R^2)^n\ , $$ and hence a matrix, after fixing some identification $(\Bbb R^2)^n\to\Bbb R^{2n}$; we are computing the determinant of this map.

Using the basis generalized from the one in the case $n=2$ in the posted question, we get the matrix $$ !Z= \begin{bmatrix} A & -B\\B&A \end{bmatrix} \ , $$ and we want to compute its determinant (with respect to some, equivalently any, basis). Let $E$ (German notation; here it is clearer than the international notation $I$) be the identity matrix of shape $n\times n$. We have the following equalities between block matrices: $$ \begin{aligned} \underbrace{\begin{bmatrix} A & -B\\B&A \end{bmatrix}}_{2\times 2} \underbrace{\begin{bmatrix} E \\ -iE\end{bmatrix}}_{2\times 1} &= \underbrace{\begin{bmatrix} E \\ -iE\end{bmatrix}}_{2\times 1} \underbrace{\begin{bmatrix} A +iB \end{bmatrix}}_{1\times 1} \ , \\ \underbrace{\begin{bmatrix} A & -B\\B&A \end{bmatrix}}_{2\times 2} \underbrace{\begin{bmatrix} E \\ iE\end{bmatrix}}_{2\times 1} &= \underbrace{\begin{bmatrix} E \\ iE\end{bmatrix}}_{2\times 1} \underbrace{\begin{bmatrix} A -iB \end{bmatrix}}_{1\times 1} \ , \qquad\text{(conjugated version)}\\ & \qquad\text{so, putting it all together in one block-matrix computation,}\\ \underbrace{\begin{bmatrix} A & -B\\B&A \end{bmatrix}}_{2\times 2} \underbrace{\begin{bmatrix} E & E\\ -iE & iE\end{bmatrix}}_{2\times 2} &= \underbrace{\begin{bmatrix} E & E\\ -iE & iE\end{bmatrix}}_{2\times 2} \underbrace{\begin{bmatrix} A +iB & \\ & A -iB\end{bmatrix}}_{2\times 2} \ , \\ &\qquad\text{so after the base change with $\begin{bmatrix} E & E\\ -iE & iE\end{bmatrix}$ we have the similarity} \\ \begin{bmatrix} A & -B\\B&A \end{bmatrix} &\sim \begin{bmatrix} A +iB & \\ & A -iB\end{bmatrix} \ ,\qquad\text{so} \\ \det\begin{bmatrix} A & -B\\B&A \end{bmatrix} &= \det\begin{bmatrix} A +iB & \\ & A -iB\end{bmatrix} \\ &=\det(A+iB)\cdot\det(A-iB) \\ &=\det(A+iB)\cdot\overline{\det(A+iB)} \\ &=|\ \det(A+iB)\ |^2 \ . \end{aligned} $$ Note: Structurally this means the following.
We start with $\require{AMScd}$ \begin{CD} !\Bbb C^n @>!Z>> !\Bbb C^n\\ @V \cong V V @VV \cong V\\ \Bbb R^{2n} @>> W> \Bbb R^{2n} \end{CD} where $W$ is the matrix obtained once the base-change matrix for the two (equal) vertical isomorphisms is fixed. Note that $\det W$ does not depend on the choice of these two $\cong$ arrows, as long as they are the same.

We need $\det W$. The idea is to extend the field of scalars once more, from $\Bbb R$ to $\Bbb C$! The determinant is unchanged. (This is similar to the fact that the determinant of a matrix with entries in $\Bbb Q$ stays the same if we consider the entries to lie in $\Bbb R$ or $\Bbb C$...)

So we formally tensor over $\Bbb R$ with $\Bbb C$. The functorially induced diagram is $\require{AMScd}$ \begin{CD} @. !\Bbb C^n\otimes_{\Bbb R}\Bbb C @>!Z\otimes\operatorname{id}>> !\Bbb C^n\otimes_{\Bbb R}\Bbb C\\ @. @V \cong V V @VV \cong V\\ \Bbb C^{2n} @= \Bbb R^{2n}\otimes_{\Bbb R}\Bbb C @>> W> \Bbb R^{2n}\otimes_{\Bbb R}\Bbb C @= \Bbb C^{2n} \end{CD} But now we are free to choose the matrix $W$ using $\cong$ arrows with entries in $\Bbb C$. And it turns out that we can make the choice such that over $\Bbb C$: $\require{AMScd}$ \begin{CD} !\Bbb C^n\otimes_{\Bbb R}\Bbb C @>!Z\otimes\operatorname{id}>> !\Bbb C^n\otimes_{\Bbb R}\Bbb C\\ @V \cong V V @VV \cong V\\ \Bbb C^{2n} @>> \begin{bmatrix} Z&\\&\bar Z\end{bmatrix}> \Bbb C^{2n} \end{CD} (Note: Such simple linear algebra "computations" may become structurally important, e.g. when studying Hodge structures... This is the only reason for the categorical overkill, which would be misplaced without this connection.)
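The block-matrix identity above, $\det\begin{bmatrix} A & -B\\B&A \end{bmatrix} = |\det(A+iB)|^2$, together with the similarity via $\begin{bmatrix} E & E\\ -iE & iE\end{bmatrix}$, can be verified numerically. A small sketch (numpy, random $3\times 3$ blocks; not part of the proof):

```python
import numpy as np

# Random real n x n blocks A, B; check the similarity M S = S D and the
# determinant identity det(M) = |det(A + iB)|^2 from the derivation above.
rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
E = np.eye(n)

M = np.block([[A, -B], [B, A]])               # the realified matrix
S = np.block([[E, E], [-1j * E, 1j * E]])     # the base-change matrix
D = np.block([[A + 1j * B, np.zeros((n, n))],
              [np.zeros((n, n)), A - 1j * B]])

assert np.allclose(M @ S, S @ D)  # the similarity, as one block computation
assert np.isclose(np.linalg.det(M),
                  abs(np.linalg.det(A + 1j * B)) ** 2)
```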

---

Here is a terrible and lazy proof. If $W$ is a complex vector space, write $W^!$ for $W$ considered as a real vector space.

Given an element $\lambda$ of $\mathbb{C}^\times$, pick any $T$ with determinant $\lambda$ (clearly such a $T$ exists), and consider the determinant of the linear transformation $T$ acting on the underlying real vector space $V^!$, where $V = \mathbb{C}^2$. I claim that this new determinant does not depend on the choice of $T$, but only on the element $\lambda$. Indeed, the map induced by $T$ on $\bigwedge^4 V^!$ is the same as the map induced by $\lambda$ on $\bigwedge^2 (\bigwedge^2 V)^!$ under the canonical isomorphism between the latter and $\bigwedge^4 V^!$.

It follows that we may check the claim on transformations of the form $$ \begin{pmatrix} \lambda & 0 \\ 0 & 1\\ \end{pmatrix} $$ for which it reduces to the case you already checked.

---

The map responsible for sending $A \mapsto A'$ is in fact an injective group homomorphism $\boldsymbol{\Psi}: \operatorname{GL}(n, \mathbb{C}) \to \operatorname{GL}(2n, \mathbb{R})$. Moreover, $\det$ is a group homomorphism from $\operatorname{GL}(2n, \mathbb{R})$ to $\mathbb{R}^\times$.

The elementary matrices are the generators of the general linear group. Let $E \in \operatorname{GL}(n, \mathbb{C})$ be any elementary matrix.

  • If $E$ is a column-switching matrix, it is easy to show $|\det(E)|^2 = \det(\boldsymbol{\Psi}(E)) = 1$.

$$ \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \mapsto \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix} $$

The matrix on the left has two columns swapped, so the determinant is $-1$. The one on the right has two pairs of columns swapped, so the determinant is $1$.

  • If $E$ is a column-addition matrix, it is easy to show $|\det(E)|^2 = \det(\boldsymbol{\Psi}(E)) = 1$.

$$ \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & x+yi & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \mapsto \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & x & -y & 1 & 0 & 0 & 0 \\ 0 & 0 & y & x & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix} $$

The matrix on the right trivially has determinant $1$. Now here is the meat of the problem.

  • If $E$ is a column-multiplying matrix with factor $x+yi$, it is easy to show $|\det(E)|^2 = \det(\boldsymbol{\Psi}(E)) = x^2+y^2$.

$$ \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & x+yi & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \mapsto \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & x & -y & 0 & 0 \\ 0 & 0 & 0 & 0 & y & x & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix} $$

Doing the usual determinant expansion, it becomes clear that the determinant of the matrix on the right is $x\cdot x - (-y)\cdot y = x^2+y^2$.

Thus, $\det \circ \boldsymbol{\Psi} = |\det|^2$ on the generators of $\operatorname{GL}(n, \mathbb{C})$, and since both sides are group homomorphisms, the identity extends to the entire group.

Thus, if $M \in \operatorname{GL}(n, \mathbb{C})$ satisfies $\det(M) = a+bi$, then

$$\det(\boldsymbol{\Psi}(M)) = |\det M|^2 = a^2 + b^2.$$
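Both ingredients of this answer, that $\boldsymbol{\Psi}$ is multiplicative and that $\det \circ \boldsymbol{\Psi} = |\det|^2$, can be spot-checked numerically. A sketch (numpy; `psi` is an ad-hoc name for the map $A \mapsto A'$):

```python
import numpy as np

def psi(M):
    """Realification: each entry a+bi becomes the 2x2 block [[a, -b], [b, a]]."""
    n = M.shape[0]
    R = np.empty((2 * n, 2 * n))
    for j in range(n):
        for k in range(n):
            a, b = complex(M[j, k]).real, complex(M[j, k]).imag
            R[2 * j:2 * j + 2, 2 * k:2 * k + 2] = [[a, -b], [b, a]]
    return R

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
N = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# psi is a homomorphism, and det(psi(M)) = |det(M)|^2.
assert np.allclose(psi(M @ N), psi(M) @ psi(N))
assert np.isclose(np.linalg.det(psi(M)), abs(np.linalg.det(M)) ** 2)

# The column-multiplying elementary matrix with factor x + yi:
x, y = 2.0, 3.0
E = np.eye(4, dtype=complex)
E[2, 2] = x + 1j * y
assert np.isclose(np.linalg.det(psi(E)), x * x + y * y)
```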

---

Assume w.l.o.g. that $A$ is upper triangular, e.g. in Jordan normal form: $$A = \begin{bmatrix} \lambda_1 & * \\ 0 & \lambda_2 \end{bmatrix}$$ so that we have $\det A = \lambda_1\lambda_2 = a+ib$. (This loses no generality: conjugating $A$ by an invertible complex matrix conjugates $A'$ by the corresponding real matrix, which changes neither determinant.) The eigenvalues can, of course, be complex: $\lambda_j=x_j+iy_j$, $x_j,y_j\in\mathbb R$. The corresponding real matrix is then block upper-triangular: $$A' = \begin{bmatrix} C_1 & * \\ 0 & C_2 \end{bmatrix}$$ with $$C_j = \left[\begin{array}{lr} x_j & -y_j \\ y_j & x_j \end{array}\right], \quad \det C_j = \lambda_j\overline\lambda_j.$$ We then have $$\det A' = \det C_1 \det C_2 = \lambda_1 \overline\lambda_1 \lambda_2 \overline\lambda_2 = (\det A) (\overline{\det A}) = (a+ib)(a-ib) = a^2+b^2.$$
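A concrete instance of the triangular case (numbers chosen only for illustration): take $\lambda_1 = 1+i$ and $\lambda_2 = 2$. Then

$$A = \begin{bmatrix} 1+i & 0 \\ 0 & 2 \end{bmatrix}, \qquad \det A = 2+2i, \qquad A' = \begin{bmatrix} 1 & -1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 2 \end{bmatrix},$$

and indeed $\det A' = (1^2+1^2)\cdot(2^2+0^2) = 8 = 2^2+2^2$.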

---

Suppose $A \in \mathbb{C}^{n \times n}$. Define $\phi:\mathbb{R}^{2n} \to \mathbb{C}^n $ by $\phi((x_1,y_1,...,x_n,y_n)) = \sum_k (x_k+iy_k) e_k$. It is straightforward to see that $\phi$ is real-linear and invertible (with real-linear inverse).

Write the basis $e_1,...,e_{2n} $ of $\mathbb{R}^{2n}$ as $u_1,v_1,...,u_n,v_n$.

The corresponding real basis of $\mathbb{C}^n$ is $e_1 = \phi(u_1),ie_1= \phi(v_1),...,e_n=\phi(u_n),ie_n=\phi(v_n)$.

We want to compute $\det \tilde{A}$, where $\tilde{A} = \phi^{-1}\circ A \circ \phi$.

If $V$ is invertible, note (after some expansion) that $\det(\phi^{-1}\circ V^{-1}A V \circ \phi) = \det \tilde{A}$, hence we can assume $A$ has whatever form suits us, in this case the Jordan normal form.

Note that \begin{eqnarray} \tilde{A}(\alpha u_k+\beta v_k) &=& \phi^{-1}(A\,\phi(\alpha u_k+\beta v_k)) \;=\; \phi^{-1}((\alpha+i\beta) A e_k) \\ &=& \phi^{-1}(\alpha \operatorname{re}(A e_k) - \beta \operatorname{im}(A e_k)+ i [ \alpha \operatorname{im}(A e_k) + \beta \operatorname{re}(A e_k) ] ) \\ &=& \phi^{-1}\Big( \sum_j (\alpha \operatorname{re}[A]_{jk} - \beta \operatorname{im}[A]_{jk}) e_j + (\alpha \operatorname{im}[A]_{jk} + \beta \operatorname{re}[A]_{jk} ) i e_j\Big) \\ &=& \sum_j (\alpha \operatorname{re}[A]_{jk} - \beta \operatorname{im}[A]_{jk}) u_j + (\alpha \operatorname{im}[A]_{jk} + \beta \operatorname{re}[A]_{jk} ) v_j \end{eqnarray} In particular, the block of $\tilde A$ mapping the subspace spanned by $u_k,v_k$ to the coordinates of $u_j,v_j$ is ${\bf \tilde A}_{jk} = \begin{bmatrix} \operatorname{re}[A]_{jk} & -\operatorname{im}[A]_{jk} \\ \operatorname{im}[A]_{jk} & \operatorname{re}[A]_{jk} \end{bmatrix}$. Hence, if $A$ is in Jordan normal form, then $\tilde{A}$ is a block upper-triangular matrix, with each block a $2 \times 2$ real matrix of the above form. Furthermore, the diagonal blocks are of the form $\begin{bmatrix} \operatorname{re} \lambda_k & -\operatorname{im} \lambda_k \\ \operatorname{im} \lambda_k & \operatorname{re} \lambda_k \end{bmatrix}$, where the $\lambda_k$ are the eigenvalues of $A$.

Hence $\det {\tilde A} = \prod_k \det \begin{bmatrix} \operatorname{re} \lambda_k & -\operatorname{im} \lambda_k \\ \operatorname{im} \lambda_k & \operatorname{re} \lambda_k \end{bmatrix}= \prod_k |\lambda_k|^2$.

Since $\det A = \prod_k \lambda_k$, we have the desired result.