Is this an adjunction?


(Write $\langle S\rangle$ for the submonoid generated by $S$.)

Let $A$ denote a set and $S$ denote a subset of $A^*$ equipped with a distinguished well-ordering. Let $M$ denote a monoid and $f : S \rightarrow M$ denote a function. Then we get a corresponding function $f' : \langle S\rangle \rightarrow M,$ defined recursively as follows. Firstly, $f'(1) = 1$. Secondly, for all $x \in S$ and $y \in \langle S \rangle$, if for all $x' \in S$ and $y' \in \langle S \rangle$ satisfying $x'y' = xy$ we have $x \leq x'$, then $f'(xy) = f(x) f'(y).$ That's perhaps a little hard to understand, so let me give an example.

Let $A = \{b,c\}$ and $S = \{bc \leq b \leq c\}$. Let $M = \{a,b,c\}^*$ and define $f:S \rightarrow M$ by $$bc \mapsto a, \quad b \mapsto b, \quad c \mapsto c.$$ Then the corresponding function $$f' : \langle bc,b,c\rangle \rightarrow M$$ just has the effect of replacing each subword of the form $bc$ with the symbol $a$. For example: $$f'(bbcc) = f(b)f'(bcc) = f(b)f(bc)f'(c) = f(b)f(bc)f(c)f'(1) = f(b)f(bc)f(c) = bac$$
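The recursion can be sketched in Python. This is my own illustration, not part of the question: words are strings, $S$ is listed in increasing order of its well-ordering, and the side condition that the remainder $y$ lie in $\langle S\rangle$ is glossed over (it holds automatically here, since $b, c \in S$).

```python
def extend(S, f, w):
    """Extend f : S -> M to f' on <S>, for M a free monoid of strings.

    S is a list of words in increasing order of the well-ordering; at
    each step the least word of S that prefixes w is split off.  (This
    ignores the side condition that the remainder lie in <S>, which
    holds automatically when S contains every single symbol.)
    """
    if w == "":
        return ""                        # f'(1) = 1
    for x in S:                          # scan S in increasing order
        if w.startswith(x):
            return f[x] + extend(S, f, w[len(x):])
    raise ValueError(f"{w!r} does not decompose over S")

# The example above: S = {bc <= b <= c}, f(bc) = a, f(b) = b, f(c) = c.
S = ["bc", "b", "c"]
f = {"bc": "a", "b": "b", "c": "c"}
print(extend(S, f, "bbcc"))              # bac
```

Note that the scan order matters: for the input $bcc$ the word $bc$ is tried before $b$, which is exactly what the "least $x$" condition demands.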

(The initial motivation for this question is that the function $f'$ that replaces $bc$'s with $a$'s came up naturally while solving a problem about binomial paths on the latest combinatorics assignment.)

Now, getting from $f : S \rightarrow M$ to $f' : \langle S \rangle \rightarrow M$ kind of looks like an adjunction, in the sense of category theory. I haven't been able to formalize this. So:

Question. Is this an adjunction? If so, what are the relevant categories and functors?

(Lesser question: in monoid theory, does what I'm denoting $f'$ have a name?)


Addendum. The motivation behind $f'$ was requested, so here it is.

Let $A_k$ denote the set of all $k$-length words $x$ in the alphabet $\{U,D\}$ such that for every prefix $p$ of $x,$ we have $|p|_U \geq |p|_D$. For example, $UUDD \in A_4$ has this property, because the prefixes of this word are $$\{1,U,UU,UUD,UUDD\}$$ and it can be seen that there are always at least as many $U$'s as $D$'s. (This can be visualized as a zig-zag pattern ($U$ = up and one step to the right, $D$ = down and one step to the right) in $\mathbb{R}^2$ that starts at the origin and never goes below the $x$-axis.)

Okay. Here are some numbers: $$|A_1| = |\{U\}| = 1$$

$$|A_3| = |\{UUU;UUD,UDU\}| = 3$$

$$|A_5| = |\{UUUUU;UDUUU,UUDUU,UUUDU,UUUUD;UDUDU,UUDDU,UDUUD,UUDUD,UUUDD\}| = 10$$

The problem is to show that:

Theorem. If $k$ is odd, then $|A_k| = \frac{1}{2}\binom{k+1}{(k+1)/2}.$

Our solution was as follows.

Let $B_k$ denote the set of all $k$-length words $x$ in the alphabet $\{U,D\}$ such that $|x|_U = |x|_D$ and $x_0 = D$. (This can be visualized as a length-$k$ zig-zag pattern that starts at the origin, whose first step is $D$, and which ends somewhere on the $x$-axis.)

For $k$ odd, $B_k$ is empty.

But if $k$ is even, we have:

$$|B_k| = \frac{1}{2}\binom{k}{k/2}$$

Now let $k$ denote a fixed but arbitrary odd number. We're going to define a function $$\varphi:A_k \rightarrow B_{k+1}.$$ If this function $\varphi$ can be shown to be a bijection, then we can argue as follows:

$$|A_k| = |B_{k+1}| = \frac{1}{2}\binom{k+1}{(k+1)/2}$$

which proves the result.
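(As a sanity check, not part of the original argument: both counting formulas can be verified by brute-force enumeration. A minimal Python sketch:)

```python
from itertools import product
from math import comb

def A(k):
    """Words of length k over {U, D} with every prefix having #U >= #D."""
    return [w for w in product("UD", repeat=k)
            if all(w[:i].count("U") >= w[:i].count("D")
                   for i in range(1, k + 1))]

def B(k):
    """Words of length k over {U, D} with #U = #D and first letter D."""
    return [w for w in product("UD", repeat=k)
            if w[0] == "D" and w.count("U") == w.count("D")]

print(len(A(5)), comb(6, 3) // 2)        # both are 10
```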

Informally, our proposed $\varphi$ was as follows:

  • Consider $x \in A_k$.
  • Change $x$ by adjoining a $D$ symbol on the left. The result is a word of length $k+1$ beginning with $D$; due to the oddness of $k$, its excess of $U$'s over $D$'s is even.
  • Now go through and change $U$'s into $D$'s from left to right, but with the condition that patterns of the form $UD$ are skipped over; they're not to be touched. Stop once the number of $D$'s equals the number of $U$'s.
  • Call the result $\varphi(x)$.

Here's an illustration of $\varphi$ for $k=5$, for example. The first arrow indicates adjoining a $D$ to the beginning of the word, and the second arrow indicates the rest of the process.

$$UUUUU \mapsto DUUUUU \mapsto DDDUUU$$

$$UDUUU \mapsto DUDUUU \mapsto DUDDUU$$

$$UUDUU \mapsto DUUDUU \mapsto DDUDUU$$

$$UUUDU \mapsto DUUUDU \mapsto DDUUDU$$

$$UUUUD \mapsto DUUUUD \mapsto DDUUUD$$

$$UDUDU \mapsto DUDUDU \mapsto DUDUDU$$

$$UUDDU \mapsto DUUDDU \mapsto DUUDDU$$

$$UDUUD \mapsto DUDUUD \mapsto DUDUUD$$

$$UUDUD \mapsto DUUDUD \mapsto DUUDUD$$

$$UUUDD \mapsto DUUUDD \mapsto DUUUDD$$

The question then becomes: how can we formally define $\varphi$?

Let $A$ denote the set $\{U,D\}$ and $S$ denote the poset $\{UD\leq U\leq D\}$. Define $f : S \rightarrow \{A,U,D\}^*$ as follows: $$f(UD) = A,\quad f(U)=U,\quad f(D)=D$$

Then we get a corresponding function $$f' : \{U,D\}^* \rightarrow \{A,U,D\}^*$$ by applying the construction in question. It follows that $f'$ has the effect of removing $UD$ patterns and replacing them with $A$'s. For example: $$f'(DUDUUU) = Df'(UDUUU) = DAf'(UUU) = \cdots = DAUUUf'(1) = DAUUU.$$ Once we've got the word in this form, it becomes easy to define the function that replaces $U$'s with $D$'s without touching $UD$'s, because we've replaced all the $UD$'s with $A$'s. Omitting certain details that the reader can easily work out for themselves, the full chain for $UDUUU \in A_5$ ends up looking something like this:

$$UDUUU \mapsto DUDUUU \mapsto DAUUU \mapsto DADUU \mapsto DUDDUU$$
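The same pipeline in Python (again my own sketch, with hypothetical helper names): encode the $UD$'s as $A$'s, flip the leading free $U$'s, then decode.

```python
def encode(w):
    """f' from the text: split off the least prefix word under the
    order UD <= U <= D, replacing each UD factor by A."""
    out = []
    while w:
        if w.startswith("UD"):
            out.append("A")
            w = w[2:]
        else:
            out.append(w[0])
            w = w[1:]
    return "".join(out)

def phi_enc(x):
    """phi via the encoding: prepend D, hide the UD's, flip the first
    few U's so that U's and D's balance, then decode A back to UD."""
    w = list(encode("D" + x))
    need = (w.count("U") - w.count("D")) // 2    # how many U's to flip
    for i, c in enumerate(w):
        if need == 0:
            break
        if c == "U":
            w[i] = "D"
            need -= 1
    return "".join(w).replace("A", "UD")

print(encode("DUDUUU"), phi_enc("UDUUU"))        # DAUUU DUDDUU
```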


the categories

Let 𝒲 be the category of well-ordered subsets of words and monotonic functions. That is, an object is a triple (Symbols, Words, _⊑_) consisting of a set Symbols, a subset Words of Symbols* (the sequences on Symbols), and a well-order _⊑_ on Words; and a morphism between such objects is a function between the word sets that is monotonic.

Note: by a well-order we mean a partial order where every non-empty subset has a least element.

Let ℳ be the category of well-ordered monoids and monotonic monoid homomorphisms. More specifically, the objects are monoids whose carrier set is well-ordered and whose operation is monotonic with respect to the order.

a forgetful functor

By forgetting about the monoidal structure and construing the monoid carrier as both the set of symbols and the distinguished subset of words (namely, the singleton words) we obtain a well-ordered subset object. Likewise, forgetting that a morphism preserves the operation and retaining only its order preservation gives us a morphism. Whence we have a “forgetful functor” U that yields the underlying well-ordered subset:

 U : ℳ ⟶ 𝒲
 U (M, ⊕, 0, ⊑) ≔ (M, M, ⊑)
 U g ≔ g

a free functor

Given a 𝒲-object, the free monoid construction gives us the monoid of “sentences”; i.e., sequences of words. An order on this monoid is obtained lexicographically from the order on words; we will use the same order symbol for words and sentences.

It seems we have just defined the object portion of a functor,

 F : 𝒲 ⟶ ℳ
 F (symbols, words, ⊑) ≔ (words*, catenation, empty-word, lexicographic ⊑)
 F f ≔ ???

How do we go about doing so for the morphisms? This is where OP's weird condition comes in! But first, some helper functions…

Let tail : sentences → sentences be the function that discards the first word of a sentence. Let head : sentences → words be the function that returns the smallest word starting a sentence.

Remember that a sentence is just a sequence of symbols, some subsequences of which are special and called words. The two aforementioned functions can be defined. Try it before you read on ;)

Since the words are well-ordered, every non-empty subset has a least element and so head may be defined as the least prefix word of a sentence:

head sen ≔ min { p ∈ words ∣ p is a prefix of sen }

For example, in OP's example, that is with words = {bc ⊑ b ⊑ c} ⊆ {a,b,c}* = symbols*, we have

  head bc
=⟪ definition of head ⟫
  min {p ∈ words | p is a prefix of bc}
=⟪ possible prefixes ⟫
  min {b , bc}
=⟪ order ⟫
  bc

Notice that head abc = min{a} = a (when a is itself among the words), and it is not head abc = min {a, ab, abc}, since we are not considering all prefixes but only those specially marked as "words".

Note that the head of a sentence may not exist, since the set in its definition may be empty.

With this in hand, tail sen can be defined as the original sentence sen less the prefix head sen. Like head, this is also a partial function.
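For concreteness, here is how head and tail might look in Python (my own sketch; `words` is listed in increasing order of ⊑, and None stands in for undefinedness of the partial functions):

```python
def head(words, sen):
    """Least prefix word of a sentence, or None when no word prefixes it."""
    return next((w for w in words if sen.startswith(w)), None)

def tail(words, sen):
    """The sentence less its head; partial, like head."""
    h = head(words, sen)
    return sen[len(h):] if h is not None else None

words = ["bc", "b", "c"]                 # bc ⊑ b ⊑ c, as in OP's example
print(head(words, "bc"))                 # bc  (not b: bc is least)
```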

Before we move on, let us name a fact for ease of reference:

“Decomposition”:
every sentence is either empty or the catenation of its head followed by its tail.

This allows us to define a function on words by specifying it on two disjoint cases.

For example, given a 𝒲-morphism f : S → T, we can obtain an ℳ-morphism F f : F S → F T as follows.

 F f empty-word = empty-word
 F f sen = f (head sen) · F f (tail sen)   (where · denotes catenation)

Note the typing,

f : words of S → words of T   ⇒   F f : sentences of S ⟶ sentences of T

This is clearly a monoid homomorphism; that it preserves order follows from f preserving order.

verifying freeness

Now that we have all the pieces, it would be immoral not to formally justify calling U and F a forgetful-free functor pair. Let's do that now so that our conscience is at ease, then return to the main problem afterwards (then again, who says we've left it ;))

Simply put, a free object is initial, and so for any 𝒲-object S and any ℳ-object N = (N, ⊕, 0, ⊑), every 𝒲-morphism f : S ⟶ U N gives rise to a “unique” ℳ-morphism 〖f〗 : F S ⟶ N:

〖f〗 empty-word = 0
〖f〗 sen = f (head sen) ⊕ 〖f〗(tail sen)
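In Python this extension might read as follows (a sketch under the same assumptions as before: `words` listed in increasing order of ⊑, and the target monoid N given by its unit and operation):

```python
def extend_hom(words, f, unit, op, sen):
    """The unique morphism 〖f〗 : F S -> N extending f : S -> U N:
    〖f〗(empty) = unit, and 〖f〗(sen) = f(head sen) ⊕ 〖f〗(tail sen)."""
    if sen == "":
        return unit
    h = next(w for w in words if sen.startswith(w))  # head: least prefix word
    return op(f(h), extend_hom(words, f, unit, op, sen[len(h):]))

# OP's example, with N the free monoid {a,b,c}* (strings, concatenation):
words = ["bc", "b", "c"]
f = {"bc": "a", "b": "b", "c": "c"}.__getitem__
print(extend_hom(words, f, "", lambda u, v: u + v, "bbcc"))   # bac
```

Changing the target monoid to (ℕ, 0, +) with f = len, for instance, computes the number of symbols of each decomposed word, which illustrates that nothing about strings is essential on the N side.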

Uniqueness: for any ℳ-morphism g : F S ⟶ N,

〖 f 〗= g  ≡  f w = g w , for all words w of S

Proof: let w be an arbitrary word of S,

  〖f〗 = g
⇒⟪ application ⟫
  〖f〗 w = g w
≡⟪ definition of 〖f〗 ⟫
   f w = g w

We prove the converse by induction, using the decomposition theorem. Assuming f w = g w, for all words w, we have the empty case:

  〖f〗 empty-word
 =⟪ 〖f〗 is a monoid homomorphism ⟫
  0
 =⟪ g is a monoid homomorphism ⟫
  g empty-word

and inductive case:

  〖f〗 s
 =⟪ definition of 〖f〗 ⟫
   f (head s) ⊕ 〖f〗 (tail s)
 =⟪ inductive hypothesis ⟫
   f (head s) ⊕ g (tail s)
 =⟪ assumption ⟫
   g (head s) ⊕ g (tail s)
 =⟪ g is monoid homomorphism ⟫
   g (head s ∙ tail s)  ─recall ∙ denotes catenation
 =⟪ decomposition ⟫
   g s

Hence, this theorem shows that “certain” set functions give rise to unique homomorphisms. What about the converse …

Given a morphism g : F S ⟶ N, we can obtain an obvious morphism S ⟶ U N by construing words as one-word sentences; let us denote this operation by ι g.

The typing suggests that this may be an inverse of the 〖_〗 operation. Well, for any morphism f : S ⟶ U N (which is only sensical on words), the phrase “f w = g w, for all words w of S” is more compactly expressed as f = ι g. Wait, now the uniqueness proof of earlier takes the shape

〖 f 〗= g  ≡  f = ι g

But wait! Moving an operation from one side of an equality to the other, à la another operation, means that the operations are inverses, whence ι = 〖_〗⁻¹. Thus, (F S ⟶ N) ≅ (S ⟶ U N).

Moreover, since no unusual choices were made at any point in the construction of the inverses, we have that this is a natural isomorphism (as one ought to verify formally).

But this means we have an adjunction F ⊣ U! :-)

closing remarks

Hope this helps; there are probably errors (the devil's in the details).

I'm interested in where this “naturally came up”; would you please link to the article online or explain the scenario. Thanks!

Also, here's some things to wonder about:

• I originally used the category of monoids and homomorphisms, without well-orders, and used the prefix order: a monoid operation induces a “prefix order” x ◁ y iff there's a w with x ⊕ w = y. When is this a well-order?

• Why were the definitions of F f and 〖f〗 so similar?

• Let 𝒮 be the category of sets paired with a distinguished subset, whose morphisms are just functions between the distinguished subsets. Let Mon be the category of monoids and monoid homomorphisms.

Is there an operation 𝒪 that takes sets-with-structure categories to categories such that 𝒪(𝒮) = 𝒲 and 𝒪(Mon) = ℳ? (Note that the left-hand sides are applications of the operation and the right-hand sides are the categories defined at the very beginning.)

Is such an operation “natural”? Can it be defined on arbitrary concrete categories?