A weird dilemma about the basis for a Lie algebra and Lie algebra decomposition


Hello differential geometry and Lie theory experts,

My question is a bit long, so I have organized it as follows:

-Required setup

  1. Let $G$ be a locally compact and simply connected Lie group.
  2. Let $\mathfrak{g}$ be $G$'s underlying Lie algebra.
  3. Suppose that $\mathfrak{g}=\text{span}\left\{v_1, \dots, v_m\right\}$, where $[v_i, v_j]$ does not necessarily vanish for any pair $(i,j)$.
  4. $[\cdot,\cdot]$ denotes the Lie bracket.

-The statement

If $\left\{\omega_1, \dots, \omega_r \right\}$ is a basis of $\mathfrak{g}$ s.t. $\left[\omega_i, \omega_j \right] = 0,\; \forall\; 1\leq i,j\leq r$, then $G=H_1\times\dots\times H_r$, where each $H_i$ is a one-parameter subgroup of $G$ whose underlying Lie algebra $\mathfrak{h}_i$ satisfies $\mathfrak{h}_i=\text{span}\left\{ \omega_i \right\},\; \forall i=1,\dots,r$. Furthermore, $\mathfrak{g}=\oplus_{i=1}^r \mathfrak{h}_i$.

-Reasons that prove the Statement above to be wrong

For the reasons I lay out in a later section, the statement above seems correct to me. On the other hand, it appears to be false for several reasons given in Brian C. Hall's great book, Lie Groups, Lie Algebras, and Representations:

  1. Any compact and simply connected Lie group $G$ is semisimple by Proposition 7.7, and by Theorem 7.8 any semisimple Lie algebra decomposes into a direct sum of simple Lie algebras.
  2. By the definition in Chapter 3, a simple Lie algebra $\mathfrak{h}$ is an irreducible Lie algebra with $\text{dim }\mathfrak{h} \geq 2$.
  3. By construction, each one-dimensional subalgebra $\mathfrak{h}_i$ lies in the centre of the Lie algebra $\mathfrak{g}$, since $\omega_i$ commutes with every basis element.

All in all, my decomposition $\mathfrak{g}=\oplus_{i=1}^r \mathfrak{h}_i$, given in the statement above, fails whenever $G$ is not abelian.
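
To make this concrete, here is a quick numerical check (a sketch of my own, using NumPy and $\mathfrak{so}(3)$, the Lie algebra of the compact non-abelian group $SO(3)$; only non-abelianness matters here) that a non-abelian Lie algebra cannot admit a commuting basis:

```python
import numpy as np

# Standard basis of so(3): generators of rotations about x, y, z.
L1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
L2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
L3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

def bracket(A, B):
    """Matrix Lie bracket [A, B] = AB - BA."""
    return A @ B - B @ A

# The standard basis does not commute: [L1, L2] = L3.
assert np.allclose(bracket(L1, L2), L3)

# Bilinearity: if some basis {w_i} satisfied [w_i, w_j] = 0 for all i, j,
# then expanding any X, Z in that basis would force [X, Z] = 0 for ALL
# X, Z.  A single nonzero bracket therefore rules out every commuting basis.
X = L1 + 2.0 * L2
Z = L2 - L3
print(np.linalg.norm(bracket(X, Z)))  # nonzero
```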

-Reasons that prove the Statement above to be true

  1. By Frobenius' theorem (see Lemma F.2 and Corollary F.3 in Manifolds, Tensors, and Forms: an introduction for mathematicians and physicists), once $\mathfrak{g}$ is involutive (which is the case, since $\mathfrak{g}$ is a Lie algebra), we can find a basis $\left\{\omega_1, \dots, \omega_r\right\}$ s.t. $[\omega_i, \omega_j]=0,\; \forall\; 1\leq i,j\leq r$.
  2. It is easy to show that each $\omega_i$ generates an ideal $\mathfrak{h}_i$, $\forall\; 1 \leq i \leq r$.
  3. By Proposition 4.27 and Proposition 7.5 in Hall's book, $\mathfrak{g}$ is a direct sum of the ideals $\mathfrak{h}_i$. Furthermore, by Theorem 5.11 in Hall's book, the corresponding group is $G=H_1 \times\dots\times H_r$.

-The Question

Am I confusing the representation of a group with its underlying geometry? Am I mixing up general properties of a topological group with those of its representation in $GL(n, \mathbb{F})$? Am I missing something regarding the product-space topology or the relevant representation theory? What is wrong with the statement I specify above and with the reasons I list that seem to prove it both wrong and right?


-Best Answer

Ok, so you're applying Frobenius' theorem to the distribution given by $\Delta_g=T_gG$ for all $g\in G$, which is automatically involutive. This tells you the following: there is a neighborhood $U$ of $e$ and vector fields $X_1,\dots,X_n$ on $U$ such that a) $X_1\vert_g,\dots,X_n\vert_g$ form a basis of $\Delta_g=T_gG$ for every $g\in U$ (i.e. the vector fields constitute a local frame) and b) $[X_i,X_j]=0$ for $1\le i,j\le n$. This is neat, but it has nothing to do with the Lie algebra of $G$! The Lie algebra consists of left-invariant vector fields on $G$ and those vector fields we have just obtained are neither globally defined on $G$, nor are they in general left-invariant. Indeed, as you note, if we could find a basis $Y_1,\dots,Y_n$ of the Lie algebra $\mathfrak{g}$ of $G$ such that $[Y_i,Y_j]=0$ for $1\le i,j\le n$, then the Lie bracket would vanish identically by bilinearity, i.e. $\mathfrak{g}$ would be abelian, but this is not always the case.
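
For concreteness (a toy computation of my own, not from Hall's or Renteln's books): on the Heisenberg group, the commuting frame that Frobenius' theorem produces is just a coordinate frame, while the left-invariant frame spanning the Lie algebra does not commute. A minimal SymPy sketch, assuming coordinates $(x,y,z)$ with multiplication $(a,b,c)\cdot(x,y,z)=(a+x,\,b+y,\,c+z+ay)$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def bracket(X, Y):
    """Lie bracket of vector fields on R^3, given as coefficient tuples."""
    return tuple(
        sp.simplify(sum(X[i]*sp.diff(Y[j], coords[i])
                        - Y[i]*sp.diff(X[j], coords[i]) for i in range(3)))
        for j in range(3))

# A Frobenius frame is (locally) a coordinate frame, and coordinate
# vector fields always commute:
e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert bracket(e1, e2) == (0, 0, 0)

# Left-invariant frame on the Heisenberg group:
X1 = (1, 0, 0)   # dL_g(d/dx at e) = d/dx
X2 = (0, 1, x)   # dL_g(d/dy at e) = d/dy + x d/dz
X3 = (0, 0, 1)   # dL_g(d/dz at e) = d/dz

print(bracket(X1, X2))  # (0, 0, 1), i.e. [X1, X2] = X3 != 0
```

So the two frames agree at the identity but differ as vector fields, which is exactly the distinction above.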

The "Statement" you give is still correct, but the "If" hypothesis is not always satisfied. In reality, all that statement really tells you is that the only simply connected abelian Lie groups are $\mathbb{R}^n$ for various $n$ (which can also be proven more elementarily). The true strength of Theorem 5.11 in Hall's book is that it applies to any decomposition of the Lie algebra into subalgebras.

-Follow-up Answer

First off, I'd like to thank you, Thorgott, for the great hint! I provide you with a proof which, I think, reflects what you mentioned above:

-Required Setup

  1. Let $\left\{v_1, \dots, v_m\right\}_{\vert_e}$ span $\mathfrak{g}$, which is isomorphic to $TG\vert_e$ by Corollary 3.46 in Hall's book, where $\left[v_i,\,v_j\right]_{\vert_e} \in \mathfrak{g}$ does not necessarily vanish for any $i$ and $j$, $v_i\vert_x \in \mathcal{L}(G)\;\; \forall\, i=1,\dots,m$, and $\mathcal{L}(G)$ is the space of left-invariant vector fields on $G$.
  2. By Frobenius' theorem (see Lemma F.2 and Corollary F.3 in Renteln's book), we can always conclude that $\exists\, \left\{B_1, \dots, B_m\right\}_{\vert_e}$ that spans $\mathfrak{g}$ s.t. $\left[B_i, B_k\right]_{\vert_e} = 0 = \left[\sum_p\alpha_i^pv_p, \sum_p\alpha_k^pv_p\right]_{\vert_e},\;\; \forall\; i,k=1, \dots,m$.
  3. Let $L_g : G \rightarrow G$ be the left-translation diffeomorphism $L_g(h)=g\cdot h$, where $g,h \in G$.
  4. Following the same notation in Olver's book, $v_i\vert_x = \sum_p \eta_i^p(x)\partial_p,\;\; \forall\; i=1,\dots,m$.
  5. Since $\left\{v_1, \dots, v_m\right\}_{\vert_e}$ spans $\mathfrak{g}$, $B_i\vert_e=\sum_j \alpha_i^j v_j\vert_e,\;\; \forall\; i=1,\dots,m$, where $\alpha_i^j \in \mathbb{C}$.
  6. Let $Y\vert_e \in \mathfrak{g}$ be nonzero, where $Y\vert_x \in \mathcal{L}(G)$. By Proposition 1.48 in Olver's book, every such $Y\vert_x$ generates a one-parameter subgroup $A(t)=\exp\left(tY\vert_x\right)$, where $t \in (-\epsilon, \epsilon)$ and $A(t) \subseteq G$. That is to say, it is a local group around $x \in G$.
  7. For simplicity, let $g$ in item 3 equal $\exp\left(t_gY\vert_x\right)$ for a particular $t_g \in \mathbb{R}$.

-Proof

Let $B_i\vert_e$ and $B_k\vert_e$ be our two candidate vectors for a particular pair of $(i,k) \in \{1, \dots, m\}\times\{1, \dots, m\}$. Our big assumption:

Suppose that $B_i\vert_x$ and $B_k\vert_x \in \mathcal{L}(G)$.

Thus, what we expect is: $\left[dL_g(B_i\vert_e),\, dL_g(B_k\vert_e)\right] = \left[B_i\vert_e,\, B_k\vert_e\right]=0$, by assumption, where the first equality is due to (1.32) in Olver's book and $dL_g$ is the differential of $L_g$.

Now, let's push both $B_i\vert_e$ and $B_k\vert_e$ forward via $dL_g$. Since the computation is the same for both, let's show it only for $B_i\vert_e$:

$$ \begin{split} dL_g\left(B_i\vert_e\right) &= \sum_j B_i\left(L_g^j(x)\right)_{\vert_e}\partial_j \\ &= \sum_j\sum_p \alpha_i^pv_p\left(L_g^j(x)\right)_{\vert_e} \partial_j \\ &=\sum_j\sum_p\sum_t \alpha_i^p \eta_p^t(e)\frac{\partial L_g^j}{\partial x^t}(e) \partial_j - (\ast), \end{split} $$ where the first equality is due to (1.23) in Olver's book and the rest follows from the definitions above.

We should answer the question of what $L_g^j(h)$, and hence $\frac{\partial L_g^j}{\partial x^t}(h)$, look like: $$ L_g^j(h) = x^j(h) + t_g\xi^j(h) + O(t_g^2), $$ where we assume $Y\vert_h$, given above in the setup, equals $\sum_j \xi^j(h)\partial_j$ and $x^j(h) = h^j$. This is simply the Taylor approximation of $\exp\left(tY\vert_x\right)$ at $x$ evaluated at $t_g$; this approximation of $g$ then left-acts on $h \in G$. Therefore: $$ \frac{\partial L_g^j}{\partial x^t}(h) = \delta^j_t + t_g\frac{\partial \xi^j}{\partial x ^t}(h) + O(t_g^2), $$ where $\delta^j_t$ is the Kronecker delta.

Substituting this expression into $(\ast)$ above: $$ \begin{split} &= \sum_j\sum_p\sum_t \alpha_i^p \eta_p^t(e) \left[ \delta^j_t + t_g\frac{\partial \xi^j}{\partial x ^t}(e) + O(t_g^2) \right] \partial_j \\ &= \sum_j\sum_p \alpha_i^p \eta_p^j(e) \partial_j + t_g \sum_j\sum_p\sum_t \alpha_i^p \eta_p^t(e)\frac{\partial \xi^j}{\partial x ^t}(e) \partial_j + O(t_g^2) \partial_j \\ &= B_i\vert_e + t_g\sum_j B_i(\xi^j)_{\vert_e} \partial_j-(\ast\ast), \end{split} $$ where $t_g$ is supposed to be sufficiently small. $(\ast\ast)$ applies to $B_k\vert_e$ as well.
Now, we are in a position to compute the Lie bracket, to see whether it still vanishes after translation to our new location $\vert_g$: $$ \begin{split} \left[B_i\vert_g,\, B_k\vert_g\right] &= \left[ B_i\vert_e + t_g\sum_j B_i(\xi^j)_{\vert_e} \partial_j,\, B_k\vert_e + t_g\sum_j B_k(\xi^j)_{\vert_e} \partial_j \right] \\ &= \underbrace{\color{red}{\left[B_i\vert_e,\, B_k\vert_e\right]}}_{=0} + t_g\left[B_i\vert_e, \sum_j B_k(\xi^j)_{\vert_e} \partial_j\right] + t_g\left[\sum_j B_i(\xi^j)_{\vert_e} \partial_j, B_k\vert_e\right] \\ &+ \underbrace{\color{red}{O(t_g^2)}}_{\approx 0} \\ &\neq 0, \end{split} $$ where bilinearity of the Lie bracket gives the second equality. Once we expand the terms in black, we see that the sum of these two first-order terms does not have to vanish. Therefore, our initial assumption, that $B_i\vert_x \in \mathcal{L}(G)$ and $B_k\vert_x \in \mathcal{L}(G)$, is false.
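
As a sanity check on the Taylor-expansion step for $L_g^j(h)$, here is a SymPy sketch on the Heisenberg group (my own example; on this group the expansion is exact, with vanishing $O(t_g^2)$ remainder):

```python
import sympy as sp

t, hx, hy, hz = sp.symbols('t h_x h_y h_z')

# Heisenberg multiplication (a,b,c)(x,y,z) = (a+x, b+y, c+z+a*y)
# (my own test case; any non-abelian group would do).
def mul(g, h):
    return (g[0] + h[0], g[1] + h[1], g[2] + h[2] + g[0]*h[1])

# One-parameter subgroup g(t) = exp(t Y) with Y|_e = d/dx: g(t) = (t, 0, 0).
g = (t, 0, 0)
h = (hx, hy, hz)
Lg = mul(g, h)  # components L_g^j(h)

# Infinitesimal generator xi^j(h) = d/dt L_{g(t)}^j(h) at t = 0:
xi = tuple(sp.diff(c, t).subs(t, 0) for c in Lg)
print(xi)  # (1, 0, h_y)

# Here L_g^j(h) = x^j(h) + t * xi^j(h) holds exactly (no O(t^2) remainder):
assert Lg == (hx + t*xi[0], hy + t*xi[1], hz + t*xi[2])
```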