Constants in a signature


This is my first post so I hope it works! Taking the axioms for a group as an example, the literature defines a group in (at least) two different ways:

Method 1

A signature of $(G,\circ,\,^{-1})$ and axioms

  1. Associativity. $g_{1}\circ(g_{2}\circ g_{3})=(g_{1}\circ g_{2})\circ g_{3}$.
  2. Identity. $\exists e\in G,e\circ g=g$.
  3. Inverse. $g^{-1}\circ g=e$.

Method 2

A signature of $(G,\circ,\,^{-1},e)$ and axioms

  1. Associativity. $g_{1}\circ(g_{2}\circ g_{3})=(g_{1}\circ g_{2})\circ g_{3}$.
  2. Identity. $e\circ g=g$.
  3. Inverse. $g^{-1}\circ g=e$.

The difference between the two is the definition of the constant $e$.

So, is there any mathematical difference between these methods? I presume that every theorem provable under Method 1 is also provable under Method 2. Is that correct? If so, is it true in general that constants can be excluded from signatures and instead introduced by an existential clause in an axiom?

There are 3 answers below.

Answer 1 (2 votes)

For practical purposes, no.

$(G,\circ,\,^{-1},e)$ is a definable expansion of $(G,\circ,\,^{-1})$. You can define the identity in $(G,\circ,\,^{-1})$ via $\varphi(x) \equiv (\forall y)(y \circ x = x \circ y = y)$, so for almost all intents and purposes these structures/languages are the same.
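To make the definability concrete, here is a small sketch (not from the answer itself) that recovers the identity of a finite group from its operation alone, by searching for the unique element satisfying $\varphi(x)$. The example group $\mathbb{Z}_6$ under addition mod 6 is a hypothetical choice for illustration.

```python
# Recover the identity element from the group operation alone,
# using the defining formula phi(x): for all y, y∘x = x∘y = y.

def find_identity(universe, op):
    """Return the unique element satisfying phi(x), or None if absent."""
    for x in universe:
        if all(op(y, x) == y and op(x, y) == y for y in universe):
            return x
    return None

Z6 = range(6)                          # universe of Z_6
add_mod6 = lambda a, b: (a + b) % 6    # group operation

print(find_identity(Z6, add_mod6))     # → 0
```

The search is quantifier-for-quantifier the formula $\varphi$: the inner `all` is the universal quantifier over $y$, and the loop finds a witness $x$.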

(In general, one difference between a language and its definable expansions can be quantifier elimination. For instance, $\mathrm{Th}(\mathbb{R}, +, \times, 0, 1)$ does not have quantifier elimination, but the definable expansion $\mathrm{Th}(\mathbb{R}, +, \times, 0, 1, <)$ does.)

Answer 2 (1 vote)

If you consider monoids instead of groups, there will be a difference. Substructures of monoids with a distinguished identity are again monoids, but without the constant they will usually be just plain semigroups. Consider, for example, the substructure of $\langle\mathbb{Z},+\rangle$ whose universe is the positive integers.
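A quick sketch of the answer's example, checked over a finite window of the positive integers (the window is an assumption standing in for all of them): the set is closed under $+$, hence a subsemigroup, but the monoid identity $0$ is not among its elements, so the substructure is not a submonoid.

```python
# The positive integers inside (Z, +): closed under +, but no identity.
positives = range(1, 50)   # finite window standing in for all positives

# Closure: a + b stays positive for positive a, b.
closed = all(a + b > 0 for a in positives for b in positives)

# Identity: is there an e among the positives with a + e == a for all a?
has_identity = any(all(a + e == a for a in positives) for e in positives)

print(closed, has_identity)   # → True False
```

The only candidate identity for $+$ is $0$, which lies outside the universe of the substructure, which is exactly why dropping the constant $e$ from the signature changes which substructures count.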

Answer 3 (5 votes)

As mentioned in Kyle's answer, in the case you specifically asked about, it doesn't make much difference whether you include the identity element in your list of operations.

However, generally speaking, if you single out a special element of your universe and include it (as a constant, i.e. "nullary", operation) among your set of basic operations, it may turn out to have significant consequences for the equational theory of the algebra. To see why, think about the set of equations you can write down which may or may not hold for this algebra. This set may become larger if you give a name to some "special" element of the universe.

This is most strikingly illustrated in the 1982 paper [1], in which Roger Bryant proves that if you give a name to an additional element of a finite group, the equational theory of the resulting pointed group need not be generated by a finite set of equations. In contrast, Oates and Powell proved in [2] that the equational theory of a finite group (without any specially named elements) is "finitely based".

...but let's not dwell on this deep equational stuff which can be very hard to make peace with.

On a much lighter note, there's another way to think about a nullary operation if you're a computer programmer: it's a thunk! That is, it's an element of the universe posing as a function. This has important consequences for programming, because it can determine when an expression is evaluated. (Functions and higher types are usually passed around using call-by-name, unlike primitive types, which are usually passed call-by-value.) Not sure how helpful it is to make that connection, but I think it's kind of neat.
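The thunk idea can be sketched in a few lines of Python (an illustrative example, not from the answer): a "constant" computed eagerly runs at definition time, while wrapping it as a zero-argument function delays the work until the thunk is forced by calling it.

```python
# A nullary operation as a thunk: evaluation is delayed until the call.

def expensive_identity():
    """Pretend computing the identity of (Z, +) is costly."""
    print("computing...")   # side effect marks when evaluation happens
    return 0

thunk = expensive_identity  # nothing printed yet: just passing a name

value = thunk()             # "computing..." is printed here, value == 0
print(value)                # → 0
```

Passing `expensive_identity` around unevaluated is the call-by-name behavior the answer alludes to; calling it is the call-by-value moment.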

[1] R. M. Bryant, "The laws of finite pointed groups", Bull. London Math. Soc. 14 (1982), 119-123.

[2] S. Oates and M. B. Powell, "Identical relations in finite groups", J. Algebra (1965).