What are the one-dimensional elements in the ring of symmetric functions?


The verification principle for $\lambda$-rings says (if I'm understanding correctly) that if you have a $\lambda$-ring $A$, and an equation using only $\lambda$-ring operations (addition, multiplication, negation, $0$, $1$, and the $\lambda^i$ -- as well as your variables of course), then the equation is true in $A$ if and only if it is true when the variables are restricted to be sums of one-dimensional elements of $A$. An element $x$ is said to be one-dimensional if $\lambda^i(x)=0$ for all $i>1$.
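For concreteness (this expansion is standard, not from the original post): if $x$ and $y$ are one-dimensional, then the generating series $\lambda_t(a) = \sum_{i \geq 0} \lambda^i(a) t^i$ satisfies $\lambda_t(x) = 1 + xt$ and $\lambda_t(y) = 1 + yt$, so multiplicativity of $\lambda_t$ gives

$\lambda_t(x+y) = (1+xt)(1+yt) = 1 + (x+y)t + xy\,t^2$,

whence $\lambda^1(x+y) = x+y$ and $\lambda^2(x+y) = xy$. The verification principle says that identities in the $\lambda$-operations need only be checked against such expansions.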

I seem to keep getting false results when I try to apply this. For instance, consider $\Lambda$, the ring of symmetric functions. The only one-dimensional elements I have been able to find are $0$ and $1$; and while I haven't done anything that would rule out others, they certainly seem to be hard to find. If these truly were the only one-dimensional elements, that would make sums of one-dimensional elements just the nonnegative integers. But there are plenty of identities that are true for whole numbers yet false for general elements of $\Lambda$; for instance, $2\lambda^2(x)=x^2-x$ holds for all whole numbers but fails when $x=e_1$.
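The failure at $x = e_1$ can be checked directly in two variables (a quick sanity check using SymPy; here $\lambda^2(e_1) = e_2$ under the standard $\lambda$-ring structure on $\Lambda$):

```python
import sympy as sp

x, y = sp.symbols('x y')
e1 = x + y          # first elementary symmetric polynomial
e2 = x * y          # second elementary symmetric polynomial
# In Lambda, lambda^2(e1) = e2, so the claimed identity would say 2*e2 = e1^2 - e1:
lhs = 2 * e2
rhs = e1**2 - e1
print(sp.expand(lhs - rhs))   # -x**2 + x - y**2 + y, nonzero: the identity fails
```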

Presumably, there are some one-dimensional elements that I'm missing here. What are they? What are the one-dimensional elements in $\Lambda$? Or am I making some other error instead?

Edit: It's the latter, as Darij Grinberg points out, but an answer to the original question -- presumably a proof that $0$ and $1$ are the only $1$-dimensional elements -- would still be nice, so I'll leave the question open.

Thank you!


Accepted answer:

Here is an answer to the part of the question that was not settled in the comments:

Let $\Lambda$ be the ring of symmetric functions over $\mathbb{Z}$. This is a $\lambda$-ring. I claim that the only one-dimensional elements of this $\lambda$-ring $\Lambda$ are $0$ and $1$.

Proof. It is clear that $0$ and $1$ are one-dimensional; thus, it remains to prove that every one-dimensional element of $\Lambda$ is either $0$ or $1$. So let $u$ be a one-dimensional element of $\Lambda$.

Consider $\Lambda$ as the ring of symmetric bounded-degree formal power series in the indeterminates $x_1,x_2,x_3,\ldots$.

Theorem 9.4 in my notes on lambda-rings (applied to $K = \Lambda$, $m = 1$ and $u_1 = u$) shows that the $j$-th Adams operation $\psi^j$ of $\Lambda$ satisfies $\psi^j\left(u\right) = u^j$ for each positive integer $j$. But §16.73 in Michiel Hazewinkel's Witt vectors, part 1 shows that $\psi^j$ is the $j$-th Frobenius endomorphism $\mathbf{f}_j$ of $\Lambda$ for each positive integer $j$. Thus, for each positive integer $j$, we have

(1) $u^j = \underbrace{\psi^j}_{=\mathbf{f}_j}\left(u\right) = \mathbf{f}_j\left(u\right) = u\left(x_1^j, x_2^j, x_3^j, \ldots\right)$ (by the definition of $\mathbf{f}_j$).
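To illustrate the Frobenius description of $\psi^j$ (an illustrative check in three variables, not part of the proof): $\mathbf{f}_2$ sends $p_n \mapsto p_{2n}$, and applying it to $e_2 = \frac12\left(p_1^2 - p_2\right)$ should agree with substituting $x_i \mapsto x_i^2$:

```python
import sympy as sp

xs = sp.symbols('x1 x2 x3')
p = lambda k: sum(v**k for v in xs)                 # power sum p_k in 3 variables
e2 = xs[0]*xs[1] + xs[0]*xs[2] + xs[1]*xs[2]        # elementary symmetric e_2
# e2 = (p1^2 - p2)/2, and the Frobenius f_2 sends p_n to p_{2n}:
frob_e2 = (p(2)**2 - p(4)) / 2
# Compare with substituting x_i -> x_i^2 directly into e2:
e2_at_squares = e2.subs({v: v**2 for v in xs}, simultaneous=True)
print(sp.simplify(frob_e2 - e2_at_squares))         # 0: the two descriptions agree
```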

Now, fix a nonnegative integer $n$ and a positive integer $j$. Setting $x_{n+1} = x_{n+2} = \cdots = 0$, the identity (1) specializes to

$\left(u\left(x_1,x_2,\ldots,x_n\right)\right)^j = u\left(x_1^j,x_2^j,\ldots,x_n^j\right)$.

Now, if $a$ is any $j$-th root of unity in $\mathbb{C}$, then the polynomial $\left(u\left(x_1,x_2,\ldots,x_n\right)\right)^j = u\left(x_1^j,x_2^j,\ldots,x_n^j\right)$ does not change when one of the $x_i$ (with $i \leq n$) is multiplied by $a$. Consequently, the polynomial $u\left(x_1,x_2,\ldots,x_n\right)$ is multiplied by some $j$-th root of unity when one of the $x_i$ (with $i \leq n$) is multiplied by $a$.

Since this holds for all $a$, we conclude that all monomials that occur in the polynomial $u\left(x_1,x_2,\ldots,x_n\right)$ (with nonzero coefficient) have their exponents congruent to each other modulo $j$ (more precisely: for each $i \leq n$, the exponents of the variable $x_i$ in all of these monomials are congruent to each other modulo $j$). Since this holds for all $j$, all of these exponents must be equal. In other words, the polynomial $u\left(x_1,x_2,\ldots,x_n\right)$ consists of a single monomial.

By letting $n$ go to infinity, we conclude that the same holds for the symmetric function $u$. But a symmetric function consisting of a single non-constant monomial is impossible, since symmetry would force further monomials to appear; hence $u$ is a constant. Now it is easy to use (1) to conclude that this constant is either $0$ or $1$. We are done.
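As a concrete illustration of why (1) forces $u$ to be a single monomial (not part of the proof): already for $j = 2$, a symmetric function with two distinct monomials fails, e.g. $u = e_1 = x + y$ in two variables:

```python
import sympy as sp

x, y = sp.symbols('x y')
u = x + y                                             # two distinct monomials
lhs = u**2                                            # u(x, y)^2
rhs = u.subs({x: x**2, y: y**2}, simultaneous=True)   # u(x^2, y^2)
print(sp.expand(lhs - rhs))                           # 2*x*y, so (1) fails for j = 2
```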

Second answer:

Here is an elementary proof that even the single equation $\lambda^2(x)=0$ has only the solutions $x=0$ and $x=1$ in $\Lambda\otimes\Bbb C$ (the tensor product is just to show that integrality of coefficients is not the obstruction).

In the ring $\Lambda$ of symmetric functions, the operation $\lambda^2$ is just plethystic substitution into $e_2$, the second elementary symmetric function. Plethystic substitution of a symmetric function with non-negative integer coefficients can be defined by writing it as an (infinite) sum of monomials (without coefficients, repeating the same monomial as necessary), and substituting these monomials for the (infinitely many) variables of the symmetric function in some order. The general case (with possibly negative or non-integral coefficients) is defined by extending certain algebraic properties of this operation. Using such properties one can reduce everything to the case where at least one operand is a power sum symmetric function$~p_k$; note that every element of $\Lambda$ can be written uniquely as a polynomial with rational coefficients in $p_1,p_2,p_3,\ldots$. Then by applying the basic definition one finds that $p_k[x]$ and $x[p_k]$ are equal for any $x\in\Lambda$, and both are the result of replacing every monomial in the basic indeterminates occurring in $x$ by its $k$-th power.
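This rule can be sketched in a few lines of SymPy, representing symmetric functions as polynomials in formal symbols $p_1, p_2, \ldots$ (the bound `N` and the helper name are my own, for illustration only):

```python
import sympy as sp

# Represent symmetric functions as polynomials in the power sums p1, p2, ...
N = 8                                     # illustrative truncation: only p1..p8
p = sp.symbols(f'p1:{N + 1}')             # p[0] is p1, ..., p[N-1] is pN

def plethysm_pk(k, x):
    """p_k[x] = x[p_k]: replace each p_i occurring in x by p_{k*i}."""
    return x.subs({p[i]: p[k * (i + 1) - 1] for i in range(N // k)},
                  simultaneous=True)

e2 = (p[0]**2 - p[1]) / 2                 # e2 = (p1^2 - p2)/2
print(plethysm_pk(2, e2))                 # p2**2/2 - p4/2
```

The `simultaneous=True` flag matters: without it, the chained substitutions $p_1 \to p_2 \to p_4$ would be applied one after another.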

In $\Lambda\otimes\Bbb Q$, writing $e_2=\frac12(p_1^2-p_2)$ in terms of power sum symmetric functions, the fact that plethystic substitution of a given symmetric function $x$ is a ring morphism $\Lambda\to\Lambda$ means that we are looking for solutions to $p_2[x]=p_1[x]^2$, or, since $p_1[x]=x$ for any$~x$, for solutions to $p_2[x]=x^2$. But such solutions must be constants, by the following argument: if $x$ is not constant, then writing $x$ as a polynomial in the $p_k$ there is a highest $k$ for which $p_k$ occurs (with nonzero coefficient); but then $p_{2k}$ occurs in the expression for $p_2[x]=x[p_2]$ (which is obtained by doubling all indices in the expression for$~x$, by applying the ring morphism property to $x[p_2]$ and using $p_n[p_m]=p_{nm}$), while it does not occur in the expression for $x^2$; this contradicts $p_2[x]=x^2$. And for constant $x$ one has $p_2[x]=x$, and the equation $x^2=x$ has only the solutions $x=0$ and $x=1$.
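The contradiction can be seen concretely (an illustrative SymPy computation; take $x = p_1 + p_2$, so the highest index occurring is $k = 2$):

```python
import sympy as sp

p1, p2, p4 = sp.symbols('p1 p2 p4')
x = p1 + p2                               # highest power sum occurring is p2 (k = 2)
# p2[x] doubles all power-sum indices: p1 -> p2, p2 -> p4
p2_of_x = x.subs({p1: p2, p2: p4}, simultaneous=True)
print(p2_of_x)                            # p2 + p4: contains p4
print(sp.expand(x**2))                    # p1**2 + 2*p1*p2 + p2**2: no p4
```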

(I don't think $x=0$ ought to be called $1$-dimensional, but that's a different matter.)