Help understanding why finite Boolean rngs must be rings


I'm working through the exercises in Introduction to Boolean Algebras by Halmos and Givant. I'm looking to show the following, an exercise from the first chapter: every finite Boolean rng must have a unit.

Halmos and Givant attribute this observation to Stone, in his original paper on the representation theorem. Stone shows the result by explicitly constructing the unit as a sum of elementary symmetric polynomials. For a finite Boolean rng $B$ with $|B| = n$, we consider the elementary symmetric polynomials in $n$ variables, which are:

$e_1(x_1, \dots, x_n) = x_1 + \dots + x_n$,

$e_2(x_1, \dots, x_n) = x_1 x_2 + \dots + x_{n-1}x_n$, ...

$e_n(x_1, \dots, x_n) = x_1\cdots x_n$

Stone shows that the unit in a finite Boolean rng (with $n$ elements) is given by the sum $e_1(a_1, \dots, a_n) + \dots + e_n(a_1, \dots, a_n)$, where $\{a_1, \dots, a_n\} = B$.
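As a sanity check, here is Stone's formula verified in Python on a concrete finite Boolean ring: the power set of a two-element set, with addition as symmetric difference and multiplication as intersection. (The model and the helper name `e` are my own, not from Stone's paper.)

```python
from itertools import combinations
from functools import reduce

# Concrete finite Boolean ring: the power set of U, with
# + = symmetric difference and * = intersection. Its unit is U itself.
U = frozenset({"a", "b"})
B = [frozenset(s) for k in range(len(U) + 1) for s in combinations(U, k)]

def e(k, elems):
    # k-th elementary symmetric polynomial of elems, evaluated in B
    terms = (reduce(lambda x, y: x & y, c) for c in combinations(elems, k))
    return reduce(lambda x, y: x ^ y, terms, frozenset())

# Stone's candidate unit: e_1(a_1,...,a_n) + ... + e_n(a_1,...,a_n)
unit = reduce(lambda x, y: x ^ y,
              (e(k, B) for k in range(1, len(B) + 1)), frozenset())

assert unit == U
assert all((unit & b) == b for b in B)  # unit is a multiplicative identity
```

Of course this only confirms the formula in one model; the question of *why* it works is what the answers below address.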

What I'm wondering is how one could figure this out. What observations about Boolean rngs do I need to make in order to know that the above construction is the right one to try?

As a side note, I see that in this post, Martin Brandenburg gives another argument for the conclusion using the Nakayama lemma (which seems a little high-powered for this exercise) and mentions that a direct argument is possible.

There are 3 answers below.

Best answer

Let me first give a much more intuitive argument for why a finite Boolean rng is a ring, and then relate it to Stone's construction. Any Boolean rng is in particular a lattice (bounded below but not necessarily above), with the lattice operations defined by $$x\wedge y=xy$$ and $$x\vee y=x+y+xy.$$ Since multiplication is the same as the lattice join operation, a multiplicative identity is the same thing as a greatest element of the lattice. But now the conclusion is obvious: in a finite lattice, you can find a greatest element by just taking the join of all the elements.
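The lattice argument above can be sketched directly: in the power-set model of a finite Boolean ring (a modeling choice of mine, not part of the answer), folding the join $x\vee y=x+y+xy$ over all elements lands on the greatest element, which is the unit.

```python
from itertools import combinations
from functools import reduce

U = frozenset(range(3))
B = [frozenset(s) for k in range(len(U) + 1) for s in combinations(U, k)]

def join(x, y):
    # x ∨ y = x + y + xy  (+ = symmetric difference, * = intersection)
    return x ^ y ^ (x & y)

# The join of all elements of a finite lattice is its greatest element,
# which in a Boolean ring is the multiplicative identity.
top = reduce(join, B)

assert top == U
assert all((top & b) == b for b in B)
```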

Now where does Stone's construction come in? It turns out that if you translate an $n$-fold join back into the rng operations, you just get the sum of the elementary symmetric polynomials. This is clear for $n=2$ from the definition: $$x\vee y=(x+y)+xy=e_1(x,y)+e_2(x,y).$$ If you calculate out what you get for $n=3$, it is not hard to guess that it works for any $n$, and then it's also not hard to prove it by induction on $n$.
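The $n=3$ case mentioned above is easy to check exhaustively in the power-set model (again a concrete model of my choosing): the three-fold join agrees with $e_1+e_2+e_3$ on every triple, repeats included.

```python
from itertools import combinations, product
from functools import reduce

U = frozenset(range(3))
subsets = [frozenset(c) for k in range(len(U) + 1) for c in combinations(U, k)]

def join(x, y):
    return x ^ y ^ (x & y)  # x ∨ y = x + y + xy

def sym_poly_sum(args):
    # e_1(args) + ... + e_n(args), with + = XOR and * = intersection
    total = frozenset()
    for k in range(1, len(args) + 1):
        for c in combinations(args, k):
            total ^= reduce(lambda x, y: x & y, c)
    return total

# Check x1 ∨ x2 ∨ x3 = e_1 + e_2 + e_3 on every triple of subsets.
ok = all(reduce(join, t) == sym_poly_sum(t)
         for t in product(subsets, repeat=3))
assert ok
```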

This is also closely related to inclusion-exclusion (which is another way you could come up with it): to count each element of an $n$-fold union $x_1\cup\dots \cup x_n$ exactly once, inclusion-exclusion tells you to first count each element of each of $x_1,x_2,\dots,x_n$ once, then subtract each element of each binary intersection, then add in each element of each ternary intersection, and so on. In other words, if you let $1_x$ denote the characteristic function of a set $x$, then the equation $$1_{x_1\cup\dots \cup x_n}=e_1(1_{x_1},\dots,1_{x_n})-e_2(1_{x_1},\dots,1_{x_n})+\dots+(-1)^{n-1}e_n(1_{x_1},\dots,1_{x_n})$$ is true (say, in the ring of all integer-valued functions on your universal set). Taking this mod $2$ so you are working with $\mathbb{F}_2$-valued functions which you can identify with the power set Boolean ring, this says exactly that $$x_1\vee\dots\vee x_n=e_1(x_1,\dots,x_n)+e_2(x_1,\dots,x_n)+\dots+e_n(x_1,\dots,x_n).$$ (Of course, this does not prove that this identity is true in an abstract Boolean rng unless you already have something like the representation theorem, but it's a way you might come up with it as a guess.)
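The integer-valued inclusion-exclusion identity above can be checked pointwise on a small example (the particular sets and helper names here are mine, chosen for illustration):

```python
from itertools import combinations
from math import prod

# Inclusion-exclusion for indicator values, over the integers:
# 1_{x1 ∪ ... ∪ xn} = e_1 - e_2 + ... + (-1)^(n-1) e_n
sets = [{1, 2}, {2, 3}, {3, 4}]
universe = {1, 2, 3, 4, 5}

def e(k, values):
    # k-th elementary symmetric polynomial of a list of integers
    return sum(prod(c) for c in combinations(values, k))

def via_incl_excl(pt):
    ind = [int(pt in s) for s in sets]  # indicator values 1_{x_i}(pt)
    return sum((-1) ** (k - 1) * e(k, ind)
               for k in range(1, len(sets) + 1))

ok = all(via_incl_excl(pt) == int(pt in set().union(*sets))
         for pt in universe)
assert ok
```

Reducing the coefficients mod $2$ turns the alternating signs into plain addition, which is exactly the mod-$2$ identity quoted in the answer.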

Yet another way you can come up with this formula: the join operation is just what you get by conjugating the meet operation by the complement operation (of course this only makes sense when you have a unit). In terms of ring operations, the complement operation is $x\mapsto 1+x$, so this says that $x_1\vee \dots \vee x_n$ should be given by $$1+(1+x_1)(1+x_2)\dots(1+x_n).$$ When you expand out that product you'll just get the sum of all possible products of some of the $x_i$'s, and then when you add $1$ you'll remove the empty product so you're just left with the sum of the elementary symmetric polynomials. (If you use the same idea but with integer-valued functions where the complementation operation is $x\mapsto 1-x$, this gives a slick proof of inclusion-exclusion.)
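The complement-conjugation formula is, in set terms, just De Morgan: complement each $x_i$, intersect, and complement the result. A quick check in the power-set model (the specific sets below are arbitrary choices of mine):

```python
from functools import reduce

U = frozenset(range(4))

def join(x, y):
    return x ^ y ^ (x & y)  # x ∨ y = x + y + xy

def comp(x):
    return U - x  # in ring terms, x ↦ 1 + x

# x1 ∨ ... ∨ xn = 1 + (1 + x1)(1 + x2)...(1 + xn):
# conjugate the meet (intersection) by complementation.
xs = [frozenset({0, 1}), frozenset({1, 2}), frozenset({3})]
lhs = reduce(join, xs)
rhs = comp(reduce(lambda a, b: a & b, map(comp, xs)))

assert lhs == rhs == frozenset({0, 1, 2, 3})
```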

Another answer

Inspired by Eric Wofsey's last comment, I think the best way to carry this argument out is to reason via the complementation operation in the unitization of a Boolean rng. A previous exercise was to show that the unitization always exists.

So let $B$ be a finite non-degenerate Boolean rng and consider its unitization $\mathbb{Z}_2 \oplus B$. Then observe that for each $a_j \in B$, $a_j(1 + a_j) = a_j + a_j^2 = a_j + a_j = 0$, since every Boolean rng has characteristic $2$. So consider the product $$ \prod_{i = 1}^n (1 + a_i) $$

Then for each $a_j$, we have $a_j\left(\prod_i (1 + a_i)\right) = 0$, since the factor $a_j(1 + a_j)$ kills the product, and so we also have, for each $a_j \in B$, $$ a_j\left[1 + \prod_{i = 1}^n (1 + a_i)\right] = a_j. $$ Observe that $\prod_{i} (1 + a_i) = 1 + p(a_1, \dots, a_n)$, where $p$ is a polynomial with zero constant term (in fact $p = e_1 + \dots + e_n$, recovering Stone's formula). Since $p$ has no constant term, $p(a_1, \dots, a_n)$ is an element of $B$. Writing elements of the unitization as pairs, $$ 1 + \prod_{i = 1}^n (1 + a_i) = (1, 0) + (1, p(a_1, \dots, a_n)) = (0, p(a_1, \dots, a_n)). $$ So $p(a_1, \dots, a_n)$ is the unit of $B$.
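The unitization argument can be traced in code: model elements of $\mathbb{Z}_2 \oplus B$ as pairs $(\varepsilon, b)$, take $B$ to be a concrete power-set ring, and expand $1 + \prod_i (1 + a_i)$. (The pair representation and all names here are my own sketch, assuming the standard unitization operations.)

```python
from itertools import combinations
from functools import reduce

U = frozenset({"a", "b"})
B = [frozenset(c) for k in range(len(U) + 1) for c in combinations(U, k)]
ZERO = frozenset()

# Elements of the unitization Z_2 ⊕ B are pairs (eps, b).
def add(x, y):
    return (x[0] ^ y[0], x[1] ^ y[1])

def mul(x, y):
    (e, b), (d, c) = x, y
    scale = lambda s, t: t if s else ZERO  # scalar action of Z_2 on B
    return (e & d, scale(e, c) ^ scale(d, b) ^ (b & c))

one = (1, ZERO)
prod = reduce(mul, [add(one, (0, a)) for a in B], one)  # ∏ (1 + a_i)
eps, p = add(one, prod)                                 # 1 + ∏ (1 + a_i)

assert eps == 0                      # the result lands inside B
assert all((p & a) == a for a in B)  # p is the unit of B
```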

Another answer

This is related to the lattice-theoretic approach outlined in other solutions, but I feel like it is more elementary and doesn't require complete translation to meets and joins.

First note that $eR$ is a ring with identity $e$ for any $e\in R$. (Of course, commutativity is in play in a Boolean rng.)

The key step is that if $f\notin eR$, then $g=e+f+ef$ satisfies $ge=e$ and $gf=f$, so $gR$ contains both $eR$ and $f$; in particular, $eR\subsetneq gR$.

What has happened? Starting at any $eR$, we've shown that if $eR\neq R$, then $eR$ is contained in a strictly larger ring $gR$ with identity $g$.

Of course, since $R$ is finite, this can't go on forever: at some point, $gR=R$, and $R$ has identity $g$.
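This climbing procedure is directly executable in the power-set model (a concrete choice of mine): start anywhere, and as long as some $f$ lies outside $eR$, replace $e$ by $g = e + f + ef$.

```python
from itertools import combinations

U = frozenset(range(3))
B = [frozenset(c) for k in range(len(U) + 1) for c in combinations(U, k)]

# Climb: whenever eR ≠ R, pick f ∉ eR and enlarge to g = e + f + ef.
e = B[0]
while True:
    eR = {e & x for x in B}
    outside = [f for f in B if f not in eR]
    if not outside:
        break
    f = outside[0]
    e = e ^ f ^ (e & f)  # g = e + f + ef

assert all((e & x) == x for x in B)  # e is now an identity for all of B
assert e == U
```

Since $eR$ strictly grows at each step and $B$ is finite, the loop is guaranteed to terminate.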