Why are trace monoids cancellative?


Trace monoids are just partially commutative free monoids. In other words, a trace monoid consists of the words over an alphabet, where two words are identified whenever one can be obtained from the other by repeatedly swapping adjacent occurrences of certain designated pairs of letters.

Formally, take a set of symbols $\Sigma$ and a relation (called the independence relation) $I \subseteq \Sigma \times \Sigma$ which is symmetric. Define a relation $\sim$ on the set of words $\Sigma^*$, so that $x \sim y$ if there exists $(a, b) \in I$ and $u_1, u_2 \in \Sigma^*$ such that $x = u_1 a b u_2$ and $y = u_1 b a u_2$. Let $\equiv$ be the transitive reflexive closure of $\sim$. We define the traces on $\Sigma$ induced by $I$ to be the quotient of $\Sigma^*$ under $\equiv$.

One can show that concatenation of $\Sigma^*$ is stable under $\equiv$, and thus $\Sigma^*/\equiv$ has a natural monoid structure.
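Since $\sim$ only transposes adjacent letters, it preserves length, so the $\equiv$-class of any word is finite and the definitions can be explored mechanically. Here is a brute-force Python sketch (the function name `equivalent` is mine, for illustration only; words are strings of single-character symbols):

```python
def equivalent(x, y, I):
    """Decide x ≡ y by exhaustive search: repeatedly swap adjacent
    independent letters (the relation ~). Since ~ preserves length,
    the ≡-class of x is finite and the search terminates."""
    if len(x) != len(y):
        return False
    Isym = set(I) | {(b, a) for (a, b) in I}  # I is assumed symmetric
    seen, frontier = {x}, [x]
    while frontier:
        w = frontier.pop()
        if w == y:
            return True
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in Isym:
                s = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if s not in seen:
                    seen.add(s)
                    frontier.append(s)
    return False
```

For instance, with $I = \{(a, b), (b, a)\}$ one gets $abc \equiv bac$ but $ac \not\equiv ca$.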

I want to show that this monoid structure is cancellative. So, I want to show that $xy \equiv xz \implies y \equiv z$.

According to the discussion near (1.7) here, this is clear for $\sim$ and hence is clear for $\equiv$. I do not understand why the latter part of their claim is straightforward.

Any help with this proof is appreciated.


There are 3 answers below.

BEST ANSWER

After some thought, I found an elementary proof of this myself.


Let us focus on left cancellativity; right cancellativity follows by a symmetric argument.

It is enough to prove the following:

$$\forall a \in \Sigma, x, y \in \Sigma^*, a \cdot x \equiv a \cdot y \implies x \equiv y \qquad (1)$$

Left cancellativity follows from this lemma by a simple induction on the length of the word being cancelled on the left.


To prove (1), we will show the following:

Lemma (2): Say $a \in \Sigma$, $y, x' \in \Sigma^*$, and $ax' \equiv y$. Then (a) $y$ can be written in the form $x_0 \cdot a \cdot x_1$ with $x_0, x_1 \in \Sigma^*$, such that (b) $a$ does not occur in $x_0$, (c) $x_0 \cdot x_1 \equiv x'$, and (d) $a$ commutes with every symbol occurring in $x_0$.

(Note that (d) readily means that $a$ commutes with the word $x_0$ itself.)
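Before proving Lemma (2), it can be sanity-checked exhaustively on small words. A hypothetical brute-force sketch in Python (the helper names are mine): condition (b) forces $x_0$ to end at the first occurrence of $a$ in $y$, so that split is the only candidate, and we verify (c) and (d) for it:

```python
from itertools import product

def swap_class(w, Isym):
    """The ≡-class of w: exhaustive search over swaps of adjacent
    independent letters."""
    seen, frontier = {w}, [w]
    while frontier:
        u = frontier.pop()
        for i in range(len(u) - 1):
            if (u[i], u[i + 1]) in Isym:
                s = u[:i] + u[i + 1] + u[i] + u[i + 2:]
                if s not in seen:
                    seen.add(s)
                    frontier.append(s)
    return seen

def lemma2_holds(a, xp, Isym):
    """Check Lemma (2) for the word a·x': every y ≡ a·x' splits as
    x0·a·x1 with (b) a not occurring in x0, (c) x0·x1 ≡ x', and
    (d) a independent of every letter of x0."""
    xp_class = swap_class(xp, Isym)
    for y in swap_class(a + xp, Isym):
        k = y.index(a)                 # first occurrence of a: (b) holds
        x0, x1 = y[:k], y[k + 1:]
        if not all((a, c) in Isym for c in x0):   # condition (d)
            return False
        if x0 + x1 not in xp_class:               # condition (c)
            return False
    return True

# exhaustive check over all short words with I = {(a, b)}
Isym = {("a", "b"), ("b", "a")}
assert all(lemma2_holds("a", "".join(w), Isym)
           for k in range(4) for w in product("abc", repeat=k))
```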

Now, let us see why (2) implies (1). Suppose $a \cdot x \equiv a \cdot y$ as in the hypothesis of (1). Applying (2) with $x' = x$ and with $a \cdot y$ in the role of $y$, we can decompose $a \cdot y$ as $x_0 \cdot a \cdot x_1$ satisfying the stated conditions. Since $x_0$ cannot contain $a$, it must be empty, which means $x_1 = y$. By condition (c), $x = x' \equiv x_0 \cdot x_1 = x_1 = y$, as desired.


Lemma (2) looks straightforward by visual inspection. Indeed, it can be proven directly by induction on the derivation of $ax' \equiv y$.

I have, however, found that choosing the correct induction principle here can be tricky. I recommend the following one.

Fix an element $x \in \Sigma^*$. Let $P$ be a property on elements of $\Sigma^*$. Now suppose conditions $(*)$ and $(**)$ below hold $$\forall y, [x \sim y \implies P y] \qquad (*)$$ $$\forall y \; z, [x \equiv y \land P y \land y \sim z \implies P z] \qquad (**)$$ Then, for every $y$ such that $x \equiv y$, it holds that $P y$.

Showing (**) in our case requires us to consider the situation where $x_0 \cdot a \cdot x_1 \sim z$. This involves a bit of case work, inspecting where the transposed pair is located relative to the distinguished occurrence of $a$. It is slightly tedious, but it can be done.


I have checked this proof in Coq, so I believe it is correct. You may need to assume that the independence relation is irreflexive, or that equality on $\Sigma$ is decidable, i.e., for all $x, y \in \Sigma$, either $x = y$ or $x \neq y$.

ANOTHER ANSWER

Edit: my previous answer was simply wrong. However, I have come up with a correct proof. This proof actually provides quite a bit of insight into trace monoids, but it is unfortunately rather lengthy.

Outline of the proof:

  1. Construct an equivalence relation $\simeq$ based on the idea that $x \simeq y$ iff $y$ is some permutation of $x$ that satisfies certain properties.
  2. Show that $\simeq$ respects $\sim$; then $x \equiv y$ implies $x \simeq y$.
  3. Show that $x \simeq y$ implies $x \equiv y$.
  4. Show that $xy \simeq xz$ implies $y \simeq z$.
  5. Conclude that $xy \equiv xz$ implies $y \equiv z$.

Again, I assume that $\equiv$ is an equivalence relation; this is automatic here, since the reflexive transitive closure of the symmetric relation $\sim$ is itself symmetric, hence an equivalence relation.

Write $[n] = \{x \in \mathbb{N} : 1 \leq x \leq n\}$.

Suppose that $x, y \in \Sigma^*$ both have length $n$. We say $f : [n] \to [n]$ is a "witness of equivalence" if

  1. $f$ is a bijection.
  2. For every $i \in [n]$, $x_i = y_{f(i)}$
  3. For every $a, b \in [n]$ s.t. $a < b$ and $f(a) > f(b)$, $(x_a, x_b) \in I$

We notate this situation as $f : x \simeq y$ (read as "f is a witness of the equivalence of $x$ and $y$").
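The three conditions transcribe directly into a small checker. A hypothetical Python sketch (0-indexed, so positions range over $\{0, \dots, n-1\}$ instead of $[n]$):

```python
def is_witness(f, x, y, I):
    """Return True iff f : x ≃ y, where the permutation f is given as
    a list sending position i of x to position f[i] of y (0-indexed)."""
    n = len(x)
    # (1) f is a bijection on {0, ..., n-1}, and x, y have equal length
    if len(y) != n or sorted(f) != list(range(n)):
        return False
    # (2) f carries each letter of x to an equal letter of y
    if any(x[i] != y[f[i]] for i in range(n)):
        return False
    # (3) every pair of positions whose order f inverts is independent
    Isym = set(I) | {(b, a) for (a, b) in I}
    return all((x[a], x[b]) in Isym
               for a in range(n) for b in range(a + 1, n)
               if f[a] > f[b])
```

For example, with $(a, b) \in I$, the permutation $[1, 0]$ witnesses $ab \simeq ba$; this is the transposition that appears in Lemma 4 below.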

Lemma 1: whenever $x$ is of length $n$, we have $id_n : x \simeq x$. Proof: immediate.

Lemma 2: whenever $f : x \simeq y$, $f^{-1} : y \simeq x$. Proof: Let $n$ be the common length of $x$ and $y$. (1) Since $f$ is a bijection, $f^{-1} : [n] \to [n]$ is defined and is itself a bijection. (2) Suppose $i \in [n]$. Then $x_{f^{-1}(i)} = y_{f(f^{-1}(i))} = y_i$. (3) Suppose $a, b \in [n]$ with $a < b$ and $f^{-1}(a) > f^{-1}(b)$. Then $f^{-1}(b) < f^{-1}(a)$ and $f(f^{-1}(b)) = b > a = f(f^{-1}(a))$, so condition (3) for $f$ gives $(x_{f^{-1}(b)}, x_{f^{-1}(a)}) \in I$. Since $x_{f^{-1}(w)} = y_w$ for all $w \in [n]$, this says $(y_b, y_a) \in I$, and by symmetry of $I$, $(y_a, y_b) \in I$.

Lemma 3: whenever $f : x \simeq y$ and $g : y \simeq z$, we have $g \circ f : x \simeq z$. Proof: let $n$ be the common length of $x$, $y$, $z$. (1) The composition of two bijections is clearly a bijection. (2) We have $z_{g(f(i))} = y_{f(i)} = x_i$ for all $i \in [n]$. (3) Suppose we have $a, b \in [n]$, $a < b$, $g(f(a)) > g(f(b))$. Note that since $f$ is a bijection, we have either $f(a) < f(b)$ or $f(a) > f(b)$. Case $f(a) > f(b)$: then $(x_a, x_b) \in I$. Case $f(a) < f(b)$: then $(x_a, x_b) = (y_{f(a)}, y_{f(b)}) \in I$.

Lemma 4: suppose we have $(a, b) \in I$, $x = u_1 ab u_2$, and $y = u_1 ba u_2$, with $n$ the length of $u_1$ and $m$ the length of $u_2$. Then the permutation $g : [n + m + 2] \to [n + m + 2]$ defined by $g(n + 1) = n + 2$, $g(n + 2) = n + 1$, and $g(i) = i$ for all other $i$, is a witness to $x \simeq y$.

Proof of Lemma 4: (1) Clearly, $g$ is a bijection. (2) This is immediate by definition of $g$ and by $x = u_1 ab u_2$, $y = u_1 ba u_2$. (3) Suppose $w, z \in [n + m + 2]$, $w < z$, $g(w) > g(z)$. Then it must be that $w = n + 1$, $z = n + 2$. Then $(x_w, x_z) = (a, b) \in I$.

Now we abusively write $x \simeq y$ to indicate that there exists $f : x \simeq y$.

Lemma 5: let $g : x \simeq y$ where $x$ and $y$ have length $n > 0$, and suppose $g(1) = 1$. Write $x = ax'$ and $y = ay'$. Define $h : [n - 1] \to [n - 1]$ by $h(i) = g(i + 1) - 1$. Then $h : x' \simeq y'$.

Proof: (1) Firstly, $h$ is well-defined: since $g$ is injective and $g(1) = 1$, we cannot have $g(i + 1) = 1$ for $i \in [n - 1]$, so $g(i + 1) - 1 \in [n - 1]$. Secondly, $h$ is a bijection because it is the composition of three bijections (adding 1, applying $g$, and subtracting 1). (2) Clearly, we have $x'_i = x_{i + 1} = y_{g(i + 1)} = y'_{g(i + 1) - 1} = y'_{h(i)}$ for all $i \in [n - 1]$. (3) Suppose $a, b \in [n - 1]$ with $a < b$ and $h(a) > h(b)$. Then $a + 1 < b + 1$ and $g(a + 1) > g(b + 1)$; hence $(x'_a, x'_b) = (x_{a + 1}, x_{b + 1}) \in I$.

Lemma 6: let $g : x \simeq y$ be such that $g(1) > 1$. Then there exists $h : x \simeq z$ with $h(1) = g(1) - 1$ and $z \sim y$; concretely, the $z$ constructed in the proof is obtained from $y$ by transposing positions $g(1) - 1$ and $g(1)$.

Proof: Consider the unique $w$ such that $g(w) = g(1) - 1$. Since $g(w) \neq g(1)$, we have $w \neq 1$ and thus $1 < w$. Also $g(1) > g(1) - 1 = g(w)$, so condition (3) gives $(x_1, x_w) \in I$. Since $y_{g(1)} = x_1$ and $y_{g(1) - 1} = x_w$, we may write $y = u_1 x_w x_1 u_2$ where $u_1$ has length $g(1) - 2$. Define $z = u_1 x_1 x_w u_2$; then $z \sim y$. Let $t : y \simeq z$ be the transposition witness of Lemma 4 (using the symmetry of $I$). By Lemma 3, $t \circ g : x \simeq z$, and $(t \circ g)(1) = t(g(1)) = g(1) - 1$, so we may take $h = t \circ g$.

Lemmas 1, 2, and 3 demonstrate that $\simeq$ is a reflexive, symmetric, and transitive relation; thus, an equivalence relation. Lemma 4 demonstrates that whenever $x \sim y$, $x \simeq y$. Therefore, $\equiv$ is a subset of $\simeq$; that is, whenever $x \equiv y$, we have $x \simeq y$. We wish to show that $\simeq$ and $\equiv$ are actually the same relation. To do this, we must show that $x \simeq y$ implies $x \equiv y$.

Claim: for every $n \in \mathbb{N}$, for every $x, y$ of length $n$ and $f : x \simeq y$, we have $x \equiv y$.

Proof: we proceed by induction on $n$.

Case $n = 0$: then $x = y = $ the empty word; then $x \equiv y$.

Case $n = k + 1$: then we proceed by induction on $f(1)$.

Case $f(1) = 1$: in this case, we write $x = ax'$, $y = ay'$ and apply Lemma 5 to conclude that we have $x' \simeq y'$. Since $x'$ and $y'$ have length $k$, we may apply the inductive hypothesis to conclude $x' \equiv y'$. Since you've already shown that concatenation respects $\equiv$, we may conclude $x = ax' \equiv ay' = y$.

Case $f(1) = j + 1$: in this case, Lemma 6 produces $z$ and $h : x \simeq z$ s.t. $z \sim y$ and $h(1) = j$. Then by the inner inductive hypothesis (on $f(1)$), $x \equiv z$. Since $z \sim y$, we have $z \equiv y$. By transitivity, $x \equiv y$.

This completes our proof that $\simeq$ and $\equiv$ are equivalent.
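The coincidence of $\simeq$ and $\equiv$ can also be confirmed mechanically on small instances, comparing swap-reachability against a brute-force search for a witness (a sketch with invented names, exponential and purely illustrative):

```python
from itertools import permutations, product

def equiv(x, y, Isym):
    """x ≡ y: reachability by swaps of adjacent independent letters."""
    seen, frontier = {x}, [x]
    while frontier:
        w = frontier.pop()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in Isym:
                s = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if s not in seen:
                    seen.add(s)
                    frontier.append(s)
    return y in seen

def simeq(x, y, Isym):
    """x ≃ y: some permutation f satisfies witness conditions (1)-(3)."""
    n = len(x)
    if len(y) != n:
        return False
    return any(all(x[i] == y[f[i]] for i in range(n)) and
               all((x[a], x[b]) in Isym
                   for a in range(n) for b in range(a + 1, n)
                   if f[a] > f[b])
               for f in permutations(range(n)))

# the two relations agree on all words of length <= 4 over {a, b, c}
Isym = {("a", "b"), ("b", "a")}
words = ["".join(w) for k in range(5) for w in product("abc", repeat=k)]
assert all(equiv(x, y, Isym) == simeq(x, y, Isym)
           for x in words for y in words if len(x) == len(y))
```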

With that, we prove one final Lemma.

Lemma 7: suppose $ax \equiv ay$. Then $x \equiv y$. Proof: since $\simeq$ and $\equiv$ coincide, it suffices to show that whenever $ax \simeq ay$, we have $x \simeq y$. Suppose we have some $f : ax \simeq ay$. We proceed by (strong) induction on $f(1)$.

Case $f(1) = 1$: then Lemma 5 applied to $f$ gives $x \simeq y$ directly.

Case $f(1) = k + 1$: by Lemma 6, take $z$ and $h : ax \simeq z$ with $z \sim ay$ and $h(1) = k$, where $z$ is obtained from $ay$ by transposing positions $k$ and $k + 1$. If $k = 1$, then condition (2) for $h$ gives $z_1 = (ax)_1 = a$; since $z$ is $ay$ with its first two letters swapped, this forces $y_1 = a$ and hence $z = ay$. Applying the inductive hypothesis to $h : ax \simeq ay$ (note $h(1) = 1 < f(1)$) yields $x \simeq y$. If $k \geq 2$, the transposition leaves position 1 of $ay$ untouched, so $z = az'$ for some $z'$ with $z' \sim y$. Applying the inductive hypothesis to $h : ax \simeq az'$ (note $h(1) = k < f(1)$) yields $x \simeq z'$; and $z' \sim y$ gives $z' \simeq y$ by Lemma 4, so $x \simeq y$ by Lemma 3.

Lemma 7 is proved.

We can now show that $xy \equiv xz$ implies $y \equiv z$. We proceed by structural induction on $x$.

Case $x$ empty: trivial.

Case $x = ax'$: then we have $ax'y \equiv ax'z$. By Lemma 7, $x' y \equiv x' z$. By the inductive hypothesis, $y \equiv z$.

QED.

ANOTHER ANSWER

Here is a proof based on the Projection Lemma, taken from [1, Proposition 1].

I will use $A$ for the alphabet (which is easier to type than $\Sigma$). For each subset $B$ of $A$, let $p_B: A^* \to B^*$ be the natural projection, which is the monoid morphism defined by $p_B(a) = a$ if $a \in B$ and $p_B(a) = 1$ otherwise. If $b \in A$, then we write $p_b$ for $p_{\left\{b\right\}}$.

Projection Lemma. Let $u, v \in A^*$. Then $u \equiv v$ if and only if $p_a(u) = p_a(v)$ for all $a \in A$ and $p_{\{a,b\}}(u) = p_{\{a,b\}}(v)$ for all $(a,b) \in A^2 - I$.

Proof. The condition is clearly necessary. We show it is sufficient by induction on the common length $n$ of $u$ and $v$. If $n = 0$ or $1$, the result is trivial. Suppose that $n \geqslant 2$ and let $u = au'$ with $a \in A$. Since $p_a(u) = p_a(v)$, one has $p_a(v) \neq 1$. Writing $v$ as $v'av''$ with $p_a(v') = 1$, let us show that $av' \equiv v'a$. This is clear if $v' = 1$. Otherwise, let $b$ be any letter of $v'$; it is necessarily distinct from $a$, since $p_a(v') = 1$. The first letter of $p_{\{a,b\}}(u)$ is $a$, but the first letter of $p_{\{a,b\}}(v)$ is $b$, so $p_{\{a,b\}}(u) \neq p_{\{a,b\}}(v)$, and the hypothesis forces $(a,b) \in I$. So all the letters of $v'$ commute with $a$ and hence $av' \equiv v'a$.

Consequently, $v \equiv av'v''$. Let $c, d \in A$ be such that $(c, d) \notin I$. If $a \notin \{c, d\}$, then $$ p_{\{c,d\}}(v'v'') = p_{\{c,d\}}(v) = p_{\{c,d\}}(u) = p_{\{c,d\}}(u') . $$ On the other hand, if $a \in \{c, d\}$, say $c = a$, then $p_{\{a,d\}}(v') = 1$ in light of the above, and thus $$ p_{\{a,d\}}(v) = p_{\{a,d\}}(v'av'') = ap_{\{a,d\}}(v'') = ap_{\{a,d\}}(v'v''), $$ while $p_{\{a,d\}}(u) = ap_{\{a,d\}}(u')$; whence $p_{\{a,d\}}(v'v'') = p_{\{a,d\}}(u')$.

Since moreover $p_b(v'v'') = p_b(u')$ for all $b \in A$, one gets $v'v'' \equiv u'$ by the induction hypothesis. Thus $u = au' \equiv av'v'' \equiv v'av'' = v$.
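The Projection Lemma also yields a direct equivalence test: compare finitely many projections rather than searching through swaps. A hypothetical Python sketch (`proj` and `equivalent_proj` are my names; `A` is the alphabet, `I` the independence relation):

```python
def proj(B, u):
    """The natural projection p_B : A* -> B*, erasing letters outside B."""
    return "".join(c for c in u if c in B)

def equivalent_proj(u, v, A, I):
    """Per the Projection Lemma, u and v are equivalent iff every
    one-letter projection agrees and every two-letter projection onto
    a dependent pair agrees."""
    Isym = set(I) | {(b, a) for (a, b) in I}
    if any(proj({a}, u) != proj({a}, v) for a in A):
        return False
    return all(proj({a, b}, u) == proj({a, b}, v)
               for a in A for b in A if (a, b) not in Isym)
```

With $A = \{a, b, c\}$ and $I = \{(a, b), (b, a)\}$, this agrees with the swap-based definition: $abc$ and $bac$ are equivalent, while $ac$ and $ca$ are not.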

Corollary. Every partially commutative free monoid is a submonoid of a direct product of free monoids (via the map $u \mapsto (p_B(u))_B$, where $B$ ranges over the singletons and the dependent pairs).

Corollary. Every partially commutative free monoid is cancellative (free monoids are cancellative, and cancellativity passes to direct products and submonoids).

[1] R. Cori and D. Perrin, Automates et commutations partielles. (French) RAIRO Inform. Théor. 19 (1985), no. 1, 21--32.