The definition of $K$-free algebras


This is Definition 10.9 of the book "A Course in Universal Algebra" by Burris and Sankappanavar (page 73, Millennium Edition).

Definition 10.9.

Let $K$ be a family of algebras of type $\mathscr{F}$. Given a set $X$ of variables, define the congruence $\theta_K(X)$ on $\mathbf{T}(X)$ by $$\theta_K(X) = \bigcap \Phi_K(X),$$ where $$\Phi_K(X) = \{\phi \in \operatorname{Con} \mathbf{T}(X) : \mathbf{T}(X)/\phi \in IS(K)\};$$ and then define $\mathbf{F}_K(\overline{X})$, the $K$-free algebra over $\overline{X}$, by $\mathbf{F}_K(\overline{X}) = \mathbf{T}(X)/\theta_K(X)$, where $\overline{X} = X/\theta_K(X)$.

I'm a second-year university student reading some universal algebra for an essay, and I'm finding this definition hard to understand. What is the intersection taken over? Could I also get some examples of classes $K$ and their $K$-free algebras? I'm also confused about $K$ itself: can $K$ be any class of algebras of the same type? What is the $K$-free algebra's relationship with $K$?

Sorry for posing so many questions,

Thanks.

On BEST ANSWER

Welcome to MSE! ^_^

First, what are free algebras?

There are lots of ways of thinking about free algebras, each of which is useful, but here is one which I think balances ease of understanding without watering things down too much:

You can think of a free algebra on a set $S$ as unevaluated expressions with variables coming from $S$. Given an assignment of variables to values in a "real" algebra $A$, we know how to evaluate any of these expressions to get a value in $A$.

Here's a quick (and easy) example:

Recall that a monoid is an algebra with one associative binary operation $\cdot$, which has a unit $1$.

The free monoid on $\{x,y,z\}$ consists of all terms like the following:

  • $x \cdot y$
  • $y \cdot x \cdot 1$
  • $x \cdot y \cdot z$
  • $z$
  • $x \cdot z \cdot y \cdot y$
  • etc.

These are the expressions you can make using $\cdot$ and $1$, together with the variables $x,y,z$. They form a monoid in a rather stupid way. What is $(x \cdot y) \cdot (z \cdot x)$? Well... we don't have an actual way to evaluate it, but writing the two next to each other like that is itself an expression, so we get the expression $x \cdot y \cdot z \cdot x$.

In this way, by making the operation "just add in the symbols", we have created a monoid from nothing but variables. And if we give an assignment of variables to values in a monoid, say $x \mapsto 2$, $y \mapsto 5$, $z \mapsto 3$, then we can extend (in the obvious way) this assignment to our expressions:

  • $x \cdot y \mapsto 2 \cdot 5 = 10$
  • $y \cdot x \cdot 1 \mapsto 5 \cdot 2 \cdot 1 = 10$
  • $x \cdot y \cdot z \mapsto 2 \cdot 5 \cdot 3 = 30$
  • etc.

This is exactly the universal mapping property which Burris and Sankappanavar talk about, and it is this property which makes free algebras useful.
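This evaluation idea is easy to make concrete. Here is a minimal Python sketch (my own, not from the book or the question): words of the free monoid on $\{x,y,z\}$ are tuples of variable names, concatenation is the monoid operation, and an assignment of variables extends to evaluation of whole words. The names `mul`, `unit`, and `evaluate` are invented for illustration.

```python
# Free monoid on {x, y, z}: elements are tuples of variable names.

def mul(a, b):
    """The free operation: 'just write the symbols next to each other'."""
    return a + b

unit = ()  # the empty word plays the role of 1

def evaluate(word, assignment, op, identity):
    """Extend an assignment of variables to a monoid homomorphism."""
    result = identity
    for var in word:
        result = op(result, assignment[var])
    return result

# The expression x . y . z as a word in the free monoid:
w = ("x", "y", "z")

# Evaluate in the monoid (N, *, 1) under x -> 2, y -> 5, z -> 3:
value = evaluate(w, {"x": 2, "y": 5, "z": 3}, lambda a, b: a * b, 1)
print(value)  # 30
```

The point of the sketch is that `evaluate` is completely determined by the assignment of variables: that is the universal mapping property in miniature.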

There is one small detail I'm glossing over here:

Should $x \cdot y \cdot z$ be interpreted as $x \cdot (y \cdot z)$ or as $(x \cdot y) \cdot z$?

Officially, they are two different expressions even though we know that when we evaluate $x$, $y$, and $z$ in some monoid $M$ they will end up being the same.

Similarly, $x \cdot 1$ and $x$ are technically two different expressions, even though we know they will end up meaning the same thing no matter which monoid we choose to evaluate it in.

Because $x \cdot 1 = x$ in every monoid, we won't lose our universal property if we just declare them to be equal. Similarly, we can just declare $(x \cdot y) \cdot z = x \cdot (y \cdot z)$ since we know that when we evaluate, they will be.

How do we actually do this? We define an equivalence relation on the set of all expressions with variables in $\{x,y,z\}$. We take the smallest congruence $\sim$ (an equivalence relation compatible with the operation $\cdot$) which satisfies

  • $t \cdot 1 \sim t$
  • $1 \cdot t \sim t$
  • $t_1 \cdot (t_2 \cdot t_3) \sim (t_1 \cdot t_2) \cdot t_3$

for all terms $t, t_1, t_2, t_3$.
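For monoids, this quotient has a very concrete description: two expressions are related by the smallest such equivalence exactly when they flatten to the same list of variables, once all the parentheses and $1$s are erased. Here is a small Python sketch of that normal form (my own illustration; the nested-pair representation of terms is an assumption, not notation from the book):

```python
# Monoid terms as nested structures: a string is a variable, the string "1"
# is the unit, and a pair (t1, t2) means t1 . t2.

ONE = "1"

def normalize(term):
    """Flatten a term to its tuple of variables, dropping units.

    Two terms satisfy t ~ s (for the smallest equivalence generated by
    t.1 ~ t, 1.t ~ t, and associativity) iff they normalize equally.
    """
    if term == ONE:
        return ()
    if isinstance(term, str):      # a bare variable
        return (term,)
    left, right = term             # a product of two subterms
    return normalize(left) + normalize(right)

# x . (y . z) and (x . y) . z are equivalent:
assert normalize(("x", ("y", "z"))) == normalize((("x", "y"), "z"))
# x . 1 is equivalent to x:
assert normalize(("x", ONE)) == normalize("x")
```

This is why the free monoid on a set is often described simply as "words in the alphabet": the normal forms are exactly the words.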


Ok, now that we know what free algebras are, we want to know that they exist. In practice, we will almost never use this definition directly: we show once that the construction has the universal mapping property, and from then on we use only that property. Much like (almost) nobody works from the definition of the determinant, because all we really care about are properties like $\det(AB) = \det(A)\det(B)$. This is all to say it isn't strictly necessary to fully understand this definition. You'll get it eventually, but on your first pass, it's OK if it's confusing. Burris and Sankappanavar even say as much:

There is reasonable difficulty in providing a transparent description of $K$-free algebras for most $K$. However, most of the applications of $K$-free algebras come directly from the universal mapping property...

So, with that disclaimer out of the way, let's do the construction:

Let's say our type consists of function symbols $\mathscr{F} = \{f_i^n\}$, where the superscript indicates the number of inputs. Then, following the example of monoids, we start by creating the term algebra $\mathbf{T}(S)$: the set of all unevaluated expressions using $S$ as variables and the symbols of $\mathscr{F}$ as glue. As a concrete example, say $\mathscr{F} = \{f^3, g^1, h^0\}$ and $S = \{x,y,z\}$. Then some terms in $\mathbf{T}(S)$ might be:

  • $f^3(x,y,z)$
  • $f^3(x,g^1(x),y)$
  • $h^0()$
  • $g^1(h^0())$
  • $f^3(g^1(z),g^1(z),g^1(f^3(x,x,y)))$
  • etc.

Again, these are unevaluated expressions which we can evaluate once we pick values for $x,y,z$ in a specific $\mathscr{F}$-algebra.
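To make "evaluate once we pick values" concrete, here is a short Python sketch (my own; the nested-tuple encoding of terms is an invented representation, and the particular interpretations of $f$, $g$, $h$ below are arbitrary choices):

```python
# Terms of type F = {f^3, g^1, h^0} as nested tuples:
# ("f", t1, t2, t3), ("g", t), ("h",); a bare string is a variable.

def eval_term(term, ops, env):
    """Evaluate a term given interpretations of the symbols and variables."""
    if isinstance(term, str):            # a variable: look up its value
        return env[term]
    symbol, *args = term                 # an operation applied to subterms
    return ops[symbol](*(eval_term(a, ops, env) for a in args))

# One arbitrary F-algebra on the integers: f = ternary sum, g = negation, h = 0.
ops = {"f": lambda a, b, c: a + b + c, "g": lambda a: -a, "h": lambda: 0}
env = {"x": 1, "y": 2, "z": 3}

t = ("f", "x", ("g", "x"), "y")          # the term f^3(x, g^1(x), y)
print(eval_term(t, ops, env))            # 1 + (-1) + 2 = 2
```

Swapping in a different `ops` dictionary evaluates the same terms in a different $\mathscr{F}$-algebra; the terms themselves never change.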

But we don't want a universal property for every $\mathscr{F}$-algebra; we want a universal property for some class $K$ of $\mathscr{F}$-algebras. Just as we did with monoids above, we quotient $\mathbf{T}(S)$ by the congruence which identifies $t_1$ and $t_2$ exactly when the two terms evaluate to the same element no matter how we assign values in an algebra from $K$.

The slick way of saying this is to look at all the congruences $\phi$ (which, recall, are just equivalence relations that play nice with the $\mathscr{F}$-algebra structure) satisfying $\mathbf{T}(S) / \phi \in IS(K)$. (The $IS$ in the definition allows isomorphic copies of subalgebras of members of $K$, which is harmless for our purposes.)

Then any one $\phi$ equates some terms and not others, and we want to know which terms are equated by every $\phi$, because we only want to identify terms whose equality holds in every algebra of $K$.

But how do we find the pairs equated by every $\phi$? We take the intersection: each $\phi \subseteq \mathbf{T}(S) \times \mathbf{T}(S)$ is a set of pairs, and it is quick to see that if the $\phi_i$ are all congruences, then so is $\bigcap_i \phi_i$.

This explains the mysterious intersection in the definition of $\theta_K$ -- it is just the intersection of all the congruences $\phi_i$ with $\mathbf{T}(S) / \phi_i \in IS(K)$.

From this data, we can show what we wanted to:

  • $\mathbf{T}(S) / \theta_K \in K$ whenever $K$ is a variety (in general it lies in $ISP(K)$, which is enough for the universal property).
  • For any algebra $A \in K$, homomorphisms from $\mathbf{T}(S) / \theta_K$ to $A$ are given by evaluation of our variables, as before.
  • But that is exactly the universal mapping property of free $K$-algebras!

Thus we have built the desired object with our bare hands, and now we know that it exists for every variety $K$ (and in slightly more generality, even). Now that we know it exists, we may use its universal property without question.


I hope this helps ^_^