Finding an almost complex structure (aka anti-involution) given an involution


I started studying Daniel Huybrechts's book, Complex Geometry: An Introduction. I tried studying backwards as much as possible, but I have been stuck on the concepts of almost complex structures and complexification. I have studied several books and articles on the matter, including ones by Keith Conrad, Jordan Bell, Gregory W. Moore, Steven Roman, Suetin, Kostrikin and Mainin, and Gauthier.

I have several questions on the concepts of almost complex structures and complexification. Here are some:

Assumptions, definitions and notations: Let $V$ be an $\mathbb R$-vector space. Call $K \in \operatorname{Aut}_{\mathbb R} (V^2)$ anti-involutive if $K^2 = -\operatorname{id}_{V^2}$. Observe that $K$ is anti-involutive on $V^2$ if and only if $K$ is an almost complex structure on $V^2$. Let $\Gamma(V^2)$ be the set of $\mathbb R$-subspaces of $V^2$ that are isomorphic to $V$. Let $AI(V^2)$ and $I(V^2)$ be, respectively, the sets of anti-involutive and involutive maps on $V^2$.

In another question, I ask if for every $A \in \Gamma(V^2)$ and $K \in AI(V^2)$, there exists a unique $\sigma \in I(V^2)$ such that the set of $\sigma$'s fixed points equals $A$ and such that $\sigma$ anti-commutes with $K$ (i.e. $\sigma \circ K = - K \circ \sigma$).

Now I ask:

  1. For every $A \in \Gamma(V^2)$ and $\sigma \in I(V^2)$ such that the set of $\sigma$'s fixed points equals $A$, does there exist a $K \in AI(V^2)$ such that $\sigma$ anti-commutes with $K$?

For Questions 2 and 3: Let $A \in \Gamma(V^2)$ and $\sigma \in I(V^2)$ such that the set of $\sigma$'s fixed points equals $A$. Suppose there exists a $K \in AI(V^2)$ such that $\sigma$ anti-commutes with $K$. Then $-K$ is another element of $AI(V^2)$ that $\sigma$ anti-commutes with.

  2. Are $\pm K$ the only elements $J \in AI(V^2)$ such that $\sigma$ anti-commutes with $J$?

  3. Suppose further that $K(A)$ equals the set of $-\sigma$'s fixed points (or maybe there's no need to suppose this). Observe that $-K(A)=K(A)$. Are $\pm K$ the only elements $J \in AI(V^2)$ such that $\sigma$ anti-commutes with $J$ and such that the set of $-\sigma$'s fixed points equals $J(A)$?



A complexified vector space $V$ is really the data of:

  1. A real vector space $V$,
  2. A choice of real subspaces $V_\mathrm{re}$ and $V_\mathrm{im}$ of $V$ such that $V = V_\mathrm{re} \oplus V_\mathrm{im}$,
  3. An isomorphism $\theta: V_\mathrm{re} \to V_\mathrm{im}$.

We can show that this data is equivalent to the data of:

  1. A real vector space $V$,
  2. A linear map $\sigma: V \to V$ satisfying $\sigma^2 = \operatorname{id}_V$,
  3. A linear map $K: V \to V$ satisfying $K^2 = -\operatorname{id}_V$,
  4. And $\sigma$ and $K$ must anticommute: $\sigma K = - K \sigma$.

Proof: Starting with the first definition, define $K: V \to V$ on the direct sum $V = V_\mathrm{re} \oplus V_\mathrm{im}$ by setting $K(v_\mathrm{re} + v_\mathrm{im}) = - \theta^{-1}(v_\mathrm{im}) + \theta(v_\mathrm{re})$, and define $\sigma: V \to V$ to act as the identity on $V_\mathrm{re}$ and as $-1$ on $V_\mathrm{im}$. It is easy to verify that $K^2 = -\operatorname{id}_V$ and $\sigma^2 = \operatorname{id}_V$. To check the anti-commutativity: $$ \begin{aligned} v \in V_\mathrm{re} &\implies \sigma(K(v)) = \sigma(\theta(v)) = - \theta(v) = - K(v) = -K(\sigma(v)), \quad \text{and}\\ v \in V_\mathrm{im} &\implies \sigma(K(v)) = \sigma(-\theta^{-1}(v)) = - \theta^{-1}(v) = K(v) = -K(\sigma(v)). \end{aligned} $$

On the other hand, starting with the second definition, define $V_\mathrm{re}$ as the $1$-eigenspace of $\sigma$ and $V_\mathrm{im}$ as the $(-1)$-eigenspace of $\sigma$. For any $v \in V_\mathrm{re}$ we have $$ \sigma(Kv) = -K(\sigma v) = -Kv,$$ showing that $Kv$ is in the $(-1)$-eigenspace of $\sigma$, i.e. $K(V_\mathrm{re}) \subseteq V_\mathrm{im}$. Doing the same for the imaginary part and applying $K^2 = - \operatorname{id}_V$ shows that $K$ restricts to an isomorphism $\theta: V_\mathrm{re} \to V_\mathrm{im}$.
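
As a sanity check, this equivalence can be verified numerically. Below is a minimal numpy sketch (the particular $\theta$ is an arbitrary invertible choice, not anything from the text) that builds $K$ and $\sigma$ in block form from a decomposition $\mathbb{R}^4 = V_\mathrm{re} \oplus V_\mathrm{im}$, exactly as in the proof above.

```python
import numpy as np

# V = R^4, with V_re = span(e1, e2) and V_im = span(e3, e4).
# theta: V_re -> V_im is any isomorphism; we pick one arbitrarily.
theta = np.array([[2.0, 1.0],
                  [0.0, 3.0]])          # any invertible 2x2 matrix works
theta_inv = np.linalg.inv(theta)

Z = np.zeros((2, 2))
I2 = np.eye(2)

# K(v_re + v_im) = -theta^{-1}(v_im) + theta(v_re), written in block form.
K = np.block([[Z, -theta_inv],
              [theta, Z]])

# sigma acts as +1 on V_re and -1 on V_im.
sigma = np.block([[I2, Z],
                  [Z, -I2]])

I4 = np.eye(4)
assert np.allclose(K @ K, -I4)             # K is anti-involutive
assert np.allclose(sigma @ sigma, I4)      # sigma is involutive
assert np.allclose(sigma @ K, -K @ sigma)  # they anti-commute
```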

Now we can answer your questions quickly.

  1. Yes. Choose $V_\mathrm{re}$ to be the fixed points of $\sigma$ and $V_\mathrm{im}$ to be the $(-1)$-eigenspace. Pick any isomorphism $\theta: V_\mathrm{re} \to V_\mathrm{im}$ and define $K$ from $\theta$ in the same way as above.
  2. No. Given a fixed choice of complementary subspaces $V_\mathrm{re}$ and $V_\mathrm{im}$, there are many isomorphisms $\theta: V_\mathrm{re} \to V_\mathrm{im}$, and each gives a different $K$.
  3. No, there are many for the same reason as 2.

To make things a little more concrete, let's use the first definition above to cook up a stupid complexified structure on $\mathbb{R}^2$. Let $$ V_\mathrm{re} = \{(x, 0) \mid x \in \mathbb{R}\}, \quad V_\mathrm{im} = \{(x, x) \mid x \in \mathbb{R}\},$$ so that $V_\mathrm{re}$ is the $x$-axis and $V_\mathrm{im}$ is a diagonal line. This choice of subspaces determines our involution $\sigma$: writing $(x, y) = (x - y, 0) + (y, y)$ gives $\sigma(x, y) = (x - y, 0) - (y, y) = (x - 2y, -y)$, i.e. $$ \sigma = \begin{pmatrix} 1 & -2 \\ 0 & -1 \end{pmatrix}. $$

Now we can pick a random isomorphism $\theta: V_\mathrm{re} \to V_\mathrm{im}$, say $\theta(x, 0) = (3x, 3x)$. It then follows that $K$ is defined by the matrix $$ K = \begin{pmatrix} 3 & -\frac{10}{3} \\ 3 & -3 \end{pmatrix}. $$ As you can see, there is a lot of freedom here for these choices.
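
These matrices can be checked mechanically. The numpy sketch below recomputes $\sigma$ and $K$ from the subspace definitions (writing $(x, y) = (x - y, 0) + (y, y)$, one gets $\sigma(x, y) = (x - 2y, -y)$, so the upper-right entry of $\sigma$ comes out as $-2$) and verifies the defining identities.

```python
import numpy as np

# V_re = x-axis, V_im = diagonal.  Write (x, y) = (x - y, 0) + (y, y);
# sigma fixes the first summand and negates the second:
# sigma(x, y) = (x - y, 0) - (y, y) = (x - 2y, -y).
sigma = np.array([[1.0, -2.0],
                  [0.0, -1.0]])

# theta(x, 0) = (3x, 3x) and theta^{-1}(x, x) = (x/3, 0), so
# K(x, y) = -theta^{-1}(y, y) + theta(x - y, 0) = (3x - 10y/3, 3x - 3y).
K = np.array([[3.0, -10.0 / 3.0],
              [3.0, -3.0]])

I2 = np.eye(2)
assert np.allclose(sigma @ sigma, I2)      # involution
assert np.allclose(K @ K, -I2)             # almost complex structure
assert np.allclose(sigma @ K, -K @ sigma)  # anti-commutation
```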


As a supplement to Joppy's answer:

Let $V$ be an $\mathbb R$-vector space. I will show that, given an internal direct sum of $\mathbb R$-subspaces $V = S \oplus U$ (equivalently, given some $\sigma \in I(V)$), there is a bijection between the isomorphisms $\theta: S \to U$ and the maps $K \in AI(V)$ that exchange $S$ and $U$, without using the axiom of choice. In doing so, I essentially split Joppy's answer in half.

Part I. The existence of $S$ and $U$ such that $V = S \oplus U$, whether or not $S \cong U$, is equivalent to the existence of some $\sigma \in I(V)$: given the direct sum, there exists a unique $\sigma \in I(V)$ such that $\sigma|_S = \operatorname{id}_S$ and $\sigma|_U = -\operatorname{id}_U$; given $\sigma$, choose $S = fixed(\sigma)$ and $U = fixed(-\sigma)$.
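
A small numerical illustration of Part I (numpy; the particular $\sigma$ is an arbitrary concrete choice): the projections $(\operatorname{id} \pm \sigma)/2$ recover $S$ and $U$ from $\sigma$, and conversely $\sigma = P_+ - P_-$.

```python
import numpy as np

# An arbitrary involution on R^2 (sigma^2 = id).
sigma = np.array([[1.0, -2.0],
                  [0.0, -1.0]])
I2 = np.eye(2)
assert np.allclose(sigma @ sigma, I2)

# Projections onto fixed(sigma) and fixed(-sigma).
P_plus = (I2 + sigma) / 2    # projects onto S = fixed(sigma)
P_minus = (I2 - sigma) / 2   # projects onto U = fixed(-sigma)

assert np.allclose(P_plus @ P_plus, P_plus)     # idempotent
assert np.allclose(P_minus @ P_minus, P_minus)  # idempotent
assert np.allclose(P_plus + P_minus, I2)        # V = S + U
assert np.allclose(sigma @ P_plus, P_plus)      # sigma|_S = id_S
assert np.allclose(sigma @ P_minus, -P_minus)   # sigma|_U = -id_U
assert np.allclose(P_plus - P_minus, sigma)     # recover sigma from S, U
```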

Part II. Bijection using $V = S \oplus U$ but not the existence of $\sigma$:

  • With Part I in mind: the condition '$K(S) \subseteq U$ and $K(U) \subseteq S$' is the substitute for saying that $K$ anti-commutes with $\sigma$, which we cannot quite say here since we are trying not to think about $\sigma$. Indeed, $K$ anti-commutes with $\sigma$ if and only if $K$ exchanges the subspaces $S$ and $U$ (this is the equivalence noted in Part III below).

  • Here, I will show that isomorphisms $\theta: S \to U$ are in bijection with anti-involutive automorphisms $K: V \to V$ such that $K(S) \subseteq U$ and $K(U) \subseteq S$, without using the existence of $\sigma$.

  • From $\theta$ to $K$: Choose $K(s \oplus u) = - \theta^{-1}(u) \oplus \theta(s)$ for $s \in S$ and $u \in U$.

  • From $K$ to $\theta$: From $K(S) \subseteq U$ and $K(U) \subseteq S$, applying $K$ to the latter inclusion gives $U = K^2(U) = K(K(U)) \subseteq K(S)$ (using $K^2 = -\operatorname{id}_V$ and $-U = U$), hence $K(S) = U$. Choose $\theta = \widetilde{K|_S}: S \to U$, the range restriction of $K|_S: S \to V$.

Part III. Bijection using the existence of $\sigma$ but not $V = S \oplus U$:

  • Here, I will show that anti-involutive automorphisms $K: V \to V$ that anti-commute with $\sigma$ are in bijection with isomorphisms $\theta: fixed(\sigma) \to fixed(-\sigma)$, without using that $fixed(\sigma) \oplus fixed(-\sigma) = V$.

  • Note that '$K(fixed(\sigma)) \subseteq fixed(-\sigma)$ and $K(fixed(-\sigma)) \subseteq fixed(\sigma)$' is equivalent to '$K$ anti-commutes with $\sigma$'.

  • From $\theta$ to $K$: The decomposition does not need to be assumed here, because it can be deduced from $\sigma$ alone.

    • If $v \in fixed(\sigma) \cap fixed(-\sigma)$, then $v = \sigma(v) = -v$, so $v = 0$. Moreover, every $v \in V$ decomposes as $v = \frac{v + \sigma(v)}{2} + \frac{v - \sigma(v)}{2}$, where the first summand lies in $fixed(\sigma)$ and the second in $fixed(-\sigma)$. Hence $fixed(\sigma) \oplus fixed(-\sigma) = V$ automatically.

    • So I am not assuming some arbitrary decomposition of $V$; I am deducing a specific decomposition of $V$ from $\sigma$. With that decomposition in hand, just define $K(s \oplus u) = - \theta^{-1}(u) \oplus \theta(s)$ again.

  • From $K$ to $\theta$: From $K(fixed(\sigma)) \subseteq fixed(-\sigma)$ and $K(fixed(-\sigma)) \subseteq fixed(\sigma)$, applying $K$ to the latter inclusion and using $K^2 = -\operatorname{id}_V$ gives $fixed(-\sigma) = K^2(fixed(-\sigma)) = K(K(fixed(-\sigma))) \subseteq K(fixed(\sigma))$, hence $fixed(-\sigma) = K(fixed(\sigma))$. Choose $\theta = \widetilde{K|_{fixed(\sigma)}}: fixed(\sigma) \to fixed(-\sigma)$, the range restriction of $K|_{fixed(\sigma)}: fixed(\sigma) \to V$.
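
The '$\theta$ to $K$' direction above can be sketched concretely: since $\sigma^2 = \operatorname{id}$, the decomposition $v = \frac{v + \sigma v}{2} + \frac{v - \sigma v}{2}$ is available for free, and $K(v) = -\theta^{-1}(P_- v) + \theta(P_+ v)$ then defines $K$ on all of $V$. A numpy sketch on $\mathbb R^2$ (the particular $\sigma$, and the scale $a$ of $\theta$, are assumptions for illustration):

```python
import numpy as np

sigma = np.array([[1.0, -2.0],
                  [0.0, -1.0]])   # a concrete involution on R^2 (assumption)
I2 = np.eye(2)

# sigma^2 = id forces v = (v + sigma v)/2 + (v - sigma v)/2, with the
# summands lying in fixed(sigma) and fixed(-sigma) respectively:
P_plus = (I2 + sigma) / 2
P_minus = (I2 - sigma) / 2

# Here fixed(sigma) = span{(1,0)} and fixed(-sigma) = span{(1,1)}.
# theta scales (x, 0) to a*(x, x); a is an arbitrary nonzero choice.
a = 5.0

def theta(v):              # fixed(sigma) -> fixed(-sigma)
    return a * np.array([v[0], v[0]])

def theta_inv(v):          # fixed(-sigma) -> fixed(sigma)
    return np.array([v[0] / a, 0.0])

def K(v):                  # K(v) = -theta^{-1}(P_- v) + theta(P_+ v)
    return -theta_inv(P_minus @ v) + theta(P_plus @ v)

v = np.array([2.0, 7.0])
assert np.allclose(K(K(v)), -v)                  # anti-involutive
assert np.allclose(sigma @ K(v), -K(sigma @ v))  # anti-commutes with sigma
```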

Part IV. About the example:

  1. As a check on $\sigma$: writing $(x, y) = (x-y, 0) + (y, y)$ gives $\sigma(x, y) = (x - 2y, -y)$, so the upper-right entry of $\sigma$ is $-2$ (not $-1$).

  2. For each $\theta$, there exists a unique $\tilde a \in \mathbb R \setminus \{0\}$ such that for all $x \in \mathbb R$, $\theta\begin{bmatrix} x\\ 0 \end{bmatrix}=\tilde a\begin{bmatrix} x\\ x \end{bmatrix}$, or equivalently $\theta^{-1}\begin{bmatrix} x\\ x \end{bmatrix}=\frac{1}{\tilde{a}} \begin{bmatrix} x\\ 0 \end{bmatrix}$.

  3. From $K$ to $\theta$: Given $K$, '$K$ anti-commutes with $\sigma$' is equivalent to 'there exists a unique $\tilde b \in \mathbb R \setminus \{0\}$ such that for all $x \in \mathbb R$, $K\begin{bmatrix} x\\ x \end{bmatrix}= -\frac{1}{\tilde b}\begin{bmatrix} x\\ 0 \end{bmatrix}$ and $K\begin{bmatrix} x\\ 0 \end{bmatrix}= \tilde b\begin{bmatrix} x\\ x \end{bmatrix}$'. In this case, we have for $K = \begin{bmatrix} a & b\\ c & d \end{bmatrix}$ that $\tilde b = a = c$ (in addition to $a^2+bc+1=0$ and $d=-a$). Choose $\tilde a = \tilde b$.

  4. From $\theta$ to $K$: Choose $K$ as either of the two equivalent descriptions:

    • 4.1a. $K\begin{bmatrix} x\\ y\end{bmatrix} = K\left(\begin{bmatrix}x-y\\ 0\end{bmatrix} \oplus \begin{bmatrix}y\\ y\end{bmatrix}\right) := -\theta^{-1}\begin{bmatrix}y\\ y\end{bmatrix} \oplus \theta \begin{bmatrix}x-y\\ 0\end{bmatrix} = -\frac{1}{\tilde a}\begin{bmatrix}y\\ 0\end{bmatrix} \oplus \tilde a \begin{bmatrix}x-y\\ x-y\end{bmatrix} = \begin{bmatrix} a & b\\ c & d \end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}$, with $a=\tilde a=c=-d$ and $b=-\left(\tilde a + \frac{1}{\tilde a}\right)$.

    • 4.1b. The unique map $K=\begin{bmatrix} a & b\\ c & d \end{bmatrix}$ such that $K^2=-I_2$ and $K\begin{bmatrix}x\\ 0\end{bmatrix} = \theta \begin{bmatrix}x\\ 0\end{bmatrix}$.
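
The upshot of Part IV is a one-parameter family: every anti-involutive $K$ anti-commuting with this $\sigma$ is $K = \begin{bmatrix} \tilde a & -(\tilde a + 1/\tilde a)\\ \tilde a & -\tilde a \end{bmatrix}$ for some $\tilde a \neq 0$, and $\tilde a = 3$ recovers the $K$ of Joppy's example. A numpy sketch checking a few samples:

```python
import numpy as np

sigma = np.array([[1.0, -2.0],
                  [0.0, -1.0]])

def K_of(a):
    # Part IV parametrization: a = a~ = c = -d and b = -(a~ + 1/a~).
    return np.array([[a, -(a + 1.0 / a)],
                     [a, -a]])

I2 = np.eye(2)
for a in (3.0, -0.5, 7.25):                    # arbitrary nonzero samples
    K = K_of(a)
    assert np.allclose(K @ K, -I2)             # anti-involutive
    assert np.allclose(sigma @ K, -K @ sigma)  # anti-commutes with sigma

# a~ = 3 reproduces the K from the worked example in Joppy's answer.
assert np.allclose(K_of(3.0), np.array([[3.0, -10.0 / 3.0],
                                        [3.0, -3.0]]))
```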