Proof that this diagram commutes


This is an exercise in a book I'm reading:

Define an $\mathbb R$-linear isomorphism $f_n : \mathbb C^n \to \mathbb R^{2n}$ by $$(x_1 + iy_1, x_2 + iy_2, \dots) \mapsto (x_1, y_1, x_2, y_2, \dots )$$

Let $A \in M^n (\mathbb C)$ and define $R_A(v) = v \cdot A$. Furthermore, define an injective homomorphism $\rho_n : M^n(\mathbb C) \to M^{2n}(\mathbb R)$ by replacing each entry $A_{ij} = a_{ij} + i b_{ij}$ with the $2 \times 2$ block $\left(\begin{array}{cc} a_{ij} & b_{ij} \\ -b_{ij} & a_{ij} \end{array}\right)$. Show that the following diagram commutes: $\require{AMScd}$ \begin{CD} \mathbb C^n @>\displaystyle f_n>> \mathbb R^{2n}\\ @V \displaystyle R_A V V @VV \displaystyle{R_{\rho_n(A)}} V\\ \mathbb C^n @>>\displaystyle f_n> \mathbb R^{2n} \end{CD}
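Before attempting a proof, it can help to sanity-check the claim numerically. Below is a minimal sketch using numpy; the helper names `f_n` and `rho_n` are my own, and the check simply compares the two paths around the diagram for a random $A$ and $v$.

```python
import numpy as np

def f_n(v):
    """R-linear isomorphism C^n -> R^{2n}: interleave real and imaginary parts."""
    out = np.empty(2 * len(v))
    out[0::2] = v.real
    out[1::2] = v.imag
    return out

def rho_n(A):
    """Replace each entry a + bi of A with the 2x2 block [[a, b], [-b, a]]."""
    n = A.shape[0]
    R = np.empty((2 * n, 2 * n))
    R[0::2, 0::2] = A.real
    R[0::2, 1::2] = A.imag
    R[1::2, 0::2] = -A.imag
    R[1::2, 1::2] = A.real
    return R

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# The two paths around the diagram: R_A then f_n, versus f_n then R_{rho_n(A)}.
lhs = f_n(v @ A)          # f_n(R_A(v)), with R_A(v) = v . A (row vector times matrix)
rhs = f_n(v) @ rho_n(A)   # R_{rho_n(A)}(f_n(v))
print(np.allclose(lhs, rhs))  # True
```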

So I thought maybe induction could yield a short proof. Can you tell me if this is correct?

It is easy to show that this is true if $n=1$. We assume that it is true for $n\le N$ and consider the case $n=N+1$. Let $A$ be an $(N+1)\times(N+1)$ matrix. We divide it into $4$ blocks, of sizes $2\times 2$, $2\times(N-1)$, $(N-1)\times 2$, and $(N-1) \times (N-1)$. By the induction hypothesis, the diagram commutes when restricted to each block. Hence the diagram commutes for $n=N+1$.

BEST ANSWER

As HTFB says, your induction seems wrong. You might be able to get away with an induction in which, on an $N \times N$ matrix, you take four overlapping $(N-1) \times (N-1)$ submatrices, each anchored in one corner.

But an alternative is to just write things out. I'll do half of it.

Fix $n > 0$. Suppose that $k$ is an integer between $0$ and $n-1$ (I'm going to use 0-based indexing, so the upper left corner of a matrix is $m_{00}$, to make things easier). I'm also going to write $f$ instead of $f_n$. Let's suppose that the entries of the vector $v$ are

$$ v_i = x_i + \mathbf i y_i $$

Then we have $$ R_A(v)_k = \sum_{i=0}^{n-1} v_i A_{ik}. $$ And $$ f(v)_{2k} = Re(v_k) = x_k \\ f(v)_{2k+1} = Im(v_k) = y_k. $$ And finally, \begin{align} {\rho(A)}_{2k, 2s} &= a_{ks} \\ {\rho(A)}_{2k+1, 2s} &= -b_{ks} \\ {\rho(A)}_{2k, 2s+1} &= b_{ks} \\ {\rho(A)}_{2k+1, 2s+1} &= a_{ks}. \end{align}

And then \begin{align} f(R_A(v))_{2k} &= Re(R_A(v)_k) \\ &= Re(\sum_{i=0}^{n-1} v_i A_{ik})\\ &= \sum_{i=0}^{n-1} Re(v_i A_{ik})\\ &= \sum_{i=0}^{n-1} \left( Re(v_i) Re( A_{ik}) - Im(v_i) Im( A_{ik}) \right)\\ &= \sum_{i=0}^{n-1} \left( x_i a_{ik} - y_i b_{ik} \right)\\ \end{align}

Now we just need to find $R_{\rho(A)}(f(v))$ and compare it to this. Well, \begin{align} R_{\rho(A)}(f(v))_{2k} &= (f(v) \cdot \rho(A))_{2k} \\ &= \sum_{j=0}^{2n-1} f(v)_j \rho(A)_{j,2k}. \end{align} I'm going to split this last sum into a sum with even indices and a sum with odd indices: \begin{align} R_{\rho(A)}(f(v))_{2k} &= \sum_{j=0}^{2n-1} f(v)_j \rho(A)_{j,2k}\\ &= \sum_{i=0}^{n-1} f(v)_{2i} \rho(A)_{2i,2k} + \sum_{i=0}^{n-1} f(v)_{2i+1} \rho(A)_{2i+1,2k}. \end{align}

Applying the formulas for elements of $f(v)$ from above, this becomes \begin{align} R_{\rho(A)}(f(v))_{2k} &= \sum_{i=0}^{n-1} x_i \rho(A)_{2i,2k} + \sum_{i=0}^{n-1} y_i \rho(A)_{2i+1,2k} \end{align} And now we can apply the formulas for the entries of $\rho(A)$ to get \begin{align} R_{\rho(A)}(f(v))_{2k} &= \sum_{i=0}^{n-1} x_i \rho(A)_{2i,2k} + \sum_{i=0}^{n-1} y_i \rho(A)_{2i+1,2k}\\ &= \sum_{i=0}^{n-1} x_i a_{ik} + \sum_{i=0}^{n-1} y_i (-b_{ik}) \\ &= \sum_{i=0}^{n-1} (x_i a_{ik} - y_i b_{ik}). \end{align} And now we just notice that those two expressions are equal.
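The computation above can be replayed numerically; here is a quick sketch (numpy, 0-based indexing as in the text, variable names mine) comparing the even entries of both sides, i.e. $\mathrm{Re}(\sum_i v_i A_{ik})$ against $\sum_i (x_i a_{ik} - y_i b_{ik})$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x, y = v.real, v.imag
a, b = A.real, A.imag

# Left path, even entries: f(R_A(v))_{2k} = Re( sum_i v_i A_{ik} )
left = (v @ A).real

# Final line of the derivation: sum_i (x_i a_{ik} - y_i b_{ik})
right = x @ a - y @ b

print(np.allclose(left, right))  # True
```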

Then we repeat the whole exercise for the $2k+1$ entry of the vector in the lower right, and we're done. I'm not going to write that out, and indeed, I'm not sure why I wrote this out, except that if you get rid of the summation limits, and compress a few steps into one, this really becomes pretty straightforward. Indeed, this is the sort of thing that probably shouldn't be given an explicit proof except in, say, an introductory linear algebra text where you're first talking about complex vector spaces; in any paper to be read by professional mathematicians, it's really not necessary.

ANSWER

Considering $\mathbb{C}$ as a $2$-dimensional vector space over $\mathbb{R}$, its action on itself by multiplication gives an embedding $\rho:\mathbb{C} \to M_2(\mathbb{R})$; explicitly, $$\rho(x + iy) = \begin{pmatrix} x & y \\ -y & x \end{pmatrix}$$ (modulo an irrelevant choice of sign for $y$, at least), as you describe. The map $\rho$ then induces an isomorphism of complex vector spaces $f:\mathbb{C}^n \to M^n$, where $M = \mathbb{R}^2$ with $\mathbb{C}$-action $z.x = \rho(z)x$. Thus for any $\mathbb{C}$-linear map $\alpha:\mathbb{C}^n \to \mathbb{C}^n$, the diagram $\require{AMScd}$ \begin{CD} \mathbb C^n @>\displaystyle f>> M^n\\ @V \displaystyle \alpha V V @VV \displaystyle\alpha' V\\ \mathbb C^n @>>\displaystyle f> M^n \end{CD} clearly commutes, where $\alpha'(x) = f \alpha f^{-1}(x)$. Explicitly, for a map $\alpha$ with $i$th component $\alpha(x)_i = g_{ij} x_j$ (summation over $j$ implied) for some fixed $g\in M_n(\mathbb{C})$, the corresponding map $\alpha'$ has $i$th component $$\alpha'(fx)_i = f(g_{ij} x_j)_i = g_{ij}.f(x_j) = \rho(g_{ij}) f(x_j).$$ Unraveling the notation, the commutative diagram above is then just your original diagram.
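The key fact this argument rests on, that $\rho$ respects multiplication (so multiplication by $z$ on $\mathbb{C}$ really does correspond to a matrix action on $\mathbb{R}^2$), can also be sanity-checked numerically; a minimal sketch (numpy, function name mine):

```python
import numpy as np

def rho(z):
    """Matrix of multiplication by z = x + iy on C viewed as R^2."""
    x, y = z.real, z.imag
    return np.array([[x, y], [-y, x]])

z, w = 1.0 + 2.0j, -0.5 + 3.0j
# rho is an R-algebra homomorphism: rho(zw) = rho(z) rho(w)
print(np.allclose(rho(z * w), rho(z) @ rho(w)))  # True
```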