Understanding proof of PBW 1


This is the first of a series of questions I have about understanding the PBW theorem. I am following this note. The ring $R$ is assumed to be commutative.


Definition 1: Let $\mathfrak{g}$ be a Lie algebra over a commutative ring $R$ that is free as an $R$-module with basis $X$. Fix a linear order on $X$. In the tensor algebra $\otimes^* \mathfrak{g}$, we call a monomial $x_1 \otimes \cdots \otimes x_n$ with $x_i \in X$ a reduced monomial if $x_1 \ge \cdots \ge x_n$. A tensor is reduced if it is a linear combination of reduced monomials.

Definition 2: We define the universal enveloping algebra $U(\mathfrak{g})$ of $\mathfrak{g}$ as the quotient of $\otimes^* \mathfrak{g}$ by the two-sided ideal $I$ generated by the elements $x \otimes y - y \otimes x - [x,y]$, where $[x,y]$ is the Lie bracket.


What I want to show:

Any element in $U(\mathfrak{g})$ can be written as a linear combination of the reduced monomials. (Not proving uniqueness now).


So the note does this in the following steps:

Define an operator: for monomials $A,B$ and $x,y \in X$ with $x<y$, define $$ \sigma_{A,x\otimes y, B}:\otimes^*\mathfrak{g} \rightarrow \otimes ^*\mathfrak{g}$$

$$A \otimes x \otimes y \otimes B \mapsto A \otimes (y \otimes x +[x,y]) \otimes B, $$ and $0$ on all other elements of the monomial basis of $\otimes^* \mathfrak{g}$.
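To make the operator concrete, here is a small sketch (my own encoding, not the note's) of $\sigma$ acting on a single monomial. I encode the basis order $x > y > z$ by indices $0 < 1 < 2$, so a monomial is reduced exactly when its index tuple is non-decreasing, and I assume the Heisenberg bracket $[x,y]=z$ (all other basis brackets zero) purely as an example:

```python
# Hypothetical sketch of sigma_{A, x⊗y, B} on monomials.  A tensor is a dict
# {tuple of basis indices: coefficient}; basis order x > y > z is encoded as
# index 0 > index 1 > index 2, so "reduced" means a non-decreasing tuple.

def bracket(a, b):
    # Assumed example bracket: Heisenberg algebra, [x, y] = z, others zero.
    return {(0, 1): {2: 1}, (1, 0): {2: -1}}.get((a, b), {})

def sigma(monomial, i):
    """Apply sigma at position i: A ⊗ x ⊗ y ⊗ B -> A ⊗ (y⊗x + [x,y]) ⊗ B,
    defined only when the adjacent pair at positions i, i+1 is out of order
    (x < y as basis elements, i.e. monomial[i] > monomial[i+1] as indices)."""
    a, b = monomial[i], monomial[i + 1]
    assert a > b, "sigma is only applied to an out-of-order adjacent pair"
    result = {monomial[:i] + (b, a) + monomial[i + 2:]: 1}
    for c, coeff in bracket(a, b).items():
        key = monomial[:i] + (c,) + monomial[i + 2:]
        result[key] = result.get(key, 0) + coeff
    return result

# z ⊗ y with z < y: swaps to y ⊗ z + [z, y], and here [z, y] = 0.
print(sigma((2, 1), 0))   # {(1, 2): 1}
# y ⊗ x with y < x: becomes x ⊗ y + [y, x] = x ⊗ y - z.
print(sigma((1, 0), 0))   # {(0, 1): 1, (2,): -1}
```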

Define a graph: We define it recursively.

Given a tensor, $\alpha = \sum c_X X$, there is a node for each monomial in its support.

The support of a tensor is the set of monomials with nonzero coefficients in its representation.

For each node $X$ and each operator $\sigma_{A,x\otimes y, B}$ with $x<y$, there is a directed edge, labeled $(A,x\otimes y, B)$, from $X$ to each monomial in the support of $\sigma_{A,x\otimes y, B}(X)$.

Define a map on nodes: for a fixed $s \in S:= \{A \otimes x \otimes y \otimes B \, : \, x < y \}$, let $r_s$ be the map sending a subset of nodes $\Theta$ to the subset $r_s(\Theta)$ of nodes that are targets of edges labeled $s$ with source in $\Theta$.
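Since the interpretation of $r_s$ is part of the confusion, here is a toy illustration (the graph and labels are made up, not taken from the note): $r_s$ simply pushes a set of nodes forward along all edges carrying the label $s$.

```python
# Toy illustration of r_s on a made-up labeled graph.
# edges: dict {source: {label: set of targets}}.

def r_s(edges, s, theta):
    """Return the set of targets of edges labeled s whose source is in theta."""
    out = set()
    for node in theta:
        out |= edges.get(node, {}).get(s, set())
    return out

# Hypothetical graph: node "zyx" has an s-labeled edge to "yzx",
# node "yzx" has only a t-labeled edge, so it contributes nothing under s.
edges = {"zyx": {"s": {"yzx"}}, "yzx": {"t": {"yxz"}}}
print(r_s(edges, "s", {"zyx", "yzx"}))   # {'yzx'}
```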


Lemma: There are no infinite directed paths in the graph. Hence, every infinite composition of the maps $r_s$ is eventually stationary.


What confuses me is how the following claim follows:

Therefore any sequence of reductions (applications of the operators $\sigma_{A,x\otimes y,B}$) is eventually stationary.


The whole argument seems circular to me. I do not understand the purpose of introducing the graph, or how one should interpret $r_s$.


I don't understand what your objection to the argument is, but I think you may benefit from seeing a simpler presentation of the argument. The argument you sketched is written ridiculously formally; this result is intuitively totally obvious if you spend a moment thinking about it.

Here's an example. Let's say that our basis of $\mathfrak{g}$ is $\{x,y,z\}$, ordered as $x>y>z$. Suppose we want to show that the monomial $$m=z\otimes y\otimes x$$ is a linear combination of reduced monomials. What do we do? We just use the commutator relation to repeatedly flip pairs of out-of-order basis elements until there aren't any left. So, we could start by writing $$m=y\otimes z\otimes x+[z,y]\otimes x.$$ We've now got the $y$ and $z$ in the correct order. Now we do it again with the $z$ and $x$, and then the $x$ and $y$: $$\begin{align*} m&= y\otimes x\otimes z+y\otimes[z,x]+[z,y]\otimes x \\ &= x\otimes y\otimes z + [y,x]\otimes z +y\otimes[z,x]+[z,y]\otimes x. \end{align*}$$ So, we've turned our $z\otimes y\otimes x$ into the reduced monomial $x\otimes y\otimes z$, plus a bunch of error terms involving brackets. Of course, we now need to turn those error terms into reduced monomials, but that's easy. Each bracket is some linear combination of $x,y,$ and $z$, so all the error terms turn into a linear combination of monomials of length $2$. Some of those monomials will be reduced, and some of them will be in the wrong order; for instance, we might get a term like $y\otimes x$. If we do, we can just replace $y\otimes x$ with $x\otimes y+[y,x]$. Then $x\otimes y$ is reduced, and the error term $[y,x]$ is again a linear combination of $x,y,$ and $z$. This time the error term has been reduced to a linear combination of monomials of length $1$, so these monomials are guaranteed to be reduced.

This example easily generalizes. Starting with an arbitrary monomial, if it is not reduced, repeatedly swap adjacent out-of-order basis elements (adding an error term with a bracket each time). Each time you do this, the number of pairs of (not-necessarily adjacent) factors in the monomial which are out of order decreases by $1$, so after finitely many swaps the monomial is in order. (Alternatively, this is just the fact that the symmetric group is generated by transpositions of adjacent elements: by repeatedly swapping adjacent pairs, you can reorder the variables in a monomial in whatever way you want.)
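The counting argument is easy to check mechanically. The following sketch (my own, just to illustrate the termination argument) encodes $x>y>z$ as indices $0<1<2$ and verifies that each adjacent swap of an out-of-order pair reduces the total number of out-of-order pairs by exactly one:

```python
# Check that swapping one out-of-order adjacent pair reduces the number of
# out-of-order (not necessarily adjacent) pairs by exactly 1 -- the
# termination argument for the main reduction term.

def inversions(seq):
    """Number of pairs (i, j) with i < j and seq[i], seq[j] out of order."""
    return sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
               if seq[i] > seq[j])

seq = [2, 1, 0]   # z ⊗ y ⊗ x, with x = 0 > y = 1 > z = 2
while inversions(seq) > 0:
    # find any adjacent out-of-order pair and swap it
    i = next(i for i in range(len(seq) - 1) if seq[i] > seq[i + 1])
    before = inversions(seq)
    seq[i], seq[i + 1] = seq[i + 1], seq[i]
    assert inversions(seq) == before - 1   # drops by exactly one each time
print(seq)   # [0, 1, 2], i.e. the reduced order x ⊗ y ⊗ z
```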

So, you can eventually turn any monomial into a reduced monomial, plus a bunch of error terms involving brackets. But all those error terms have lower degree than the monomial you started with, since you replaced two variables with their bracket. So, by induction on degree, the error terms can also be written as a linear combination of reduced monomials. (The base case of the induction is a monomial of degree $1$, which is always reduced.)
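The whole procedure, including the induction on degree for the error terms, can be sketched in a few lines of code. This is my own illustration, not the note's construction, and it assumes the Heisenberg bracket $[x,y]=z$ (other basis brackets zero) as a concrete example; the basis order $x>y>z$ is encoded as indices $0<1<2$, so "reduced" means a non-decreasing index tuple:

```python
# Runnable sketch of the full reduction: repeatedly apply the swap-plus-
# bracket rule until every monomial in the support is reduced.  Bracket
# terms have length n-1, so the process terminates by induction on degree.

def bracket(a, b):
    # Assumed example bracket: Heisenberg algebra, [x, y] = z, others zero.
    return {(0, 1): {2: 1}, (1, 0): {2: -1}}.get((a, b), {})

def reduce_tensor(tensor):
    """tensor: dict {tuple of basis indices: coeff}; returns an equal element
    of U(g) supported only on reduced (non-decreasing) monomials."""
    out = {}
    while tensor:
        mono, coeff = tensor.popitem()
        i = next((i for i in range(len(mono) - 1) if mono[i] > mono[i + 1]), None)
        if i is None:                      # already reduced: collect it
            out[mono] = out.get(mono, 0) + coeff
            continue
        swapped = mono[:i] + (mono[i + 1], mono[i]) + mono[i + 2:]
        tensor[swapped] = tensor.get(swapped, 0) + coeff
        for c, k in bracket(mono[i], mono[i + 1]).items():
            shorter = mono[:i] + (c,) + mono[i + 2:]   # degree drops by one
            tensor[shorter] = tensor.get(shorter, 0) + coeff * k
    return {m: c for m, c in out.items() if c != 0}

# z ⊗ y ⊗ x reduces to x ⊗ y ⊗ z - z ⊗ z here, and every monomial in the
# result is reduced (non-decreasing indices).
result = reduce_tensor({(2, 1, 0): 1})
print(result)
assert all(m == tuple(sorted(m)) for m in result)
```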