I'm interested in sets whose subsets produce unique values under a given commutative operation. I don't know what this is called. I tried searching for terms like unique commutative invariant, but didn't find much. Here are some examples.
Primes under multiplication
By the fundamental theorem of arithmetic, we know any positive integer can be represented as a unique product of primes. Conversely, the product of a set of primes uniquely determines an integer.
For example, $\{2, 3, 5\}$ is the only subset of the primes whose product is 30. $$\prod\limits_{x\in\{ 2, 3, 5\}}x = 30$$
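A quick brute-force sketch in Python (just to illustrate the claim) confirming that distinct subsets of the first few primes have distinct products:

```python
from itertools import combinations
from math import prod

primes = [2, 3, 5, 7, 11]

# Product of every subset of the first few primes.
products = {}
for r in range(len(primes) + 1):
    for subset in combinations(primes, r):
        products.setdefault(prod(subset), []).append(subset)

# Each product arises from exactly one subset, and {2, 3, 5} gives 30.
assert all(len(v) == 1 for v in products.values())
assert products[30] == [(2, 3, 5)]
```

(The empty subset is included too; its product is $1$, the empty product.)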
Constant-recursive sum under addition
This one, or perhaps the one below, is used from time to time in software development; think chmod.
$$f(n, k) = {k + \sum\limits_{i=0}^{n-1}{f(i, k)}}$$
$$f(x, 1) \in \{1, 2, 4, 8, \ldots, 2^n, \ldots\}$$
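A direct transcription of the recurrence (memoised for efficiency; a sketch, not production code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n, k):
    # f(n, k) = k plus the sum of all previous values f(0, k) .. f(n-1, k)
    return k + sum(f(i, k) for i in range(n))

# For k = 1 this generates the powers of two: 1, 2, 4, 8, ...
assert [f(n, 1) for n in range(6)] == [1, 2, 4, 8, 16, 32]
```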
$2^n$ under binary OR and addition
$2^n + k - 1$ is a different way of expressing the above. However, when $k=1$, we can additionally note that the unique commutative invariance holds under binary OR ($|$). ($|$ is always commutative, but the uniqueness property fails for an arbitrary set of integers; it does hold for the powers of two.)
$$ \{0001, \ldots, 1000, \ldots\} $$ $$ \{0010, 1000\} \subseteq \{0001, \ldots, 1000, \ldots \} $$ $$ |\{0010, 1000\} = 1010 $$
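Again a quick brute-force sketch showing that each 4-bit value arises from exactly one subset of the unit vectors:

```python
from itertools import combinations
from functools import reduce

units = [0b0001, 0b0010, 0b0100, 0b1000]

# Bitwise OR over every subset of the unit vectors.
ors = {}
for r in range(len(units) + 1):
    for subset in combinations(units, r):
        value = reduce(lambda a, b: a | b, subset, 0)
        ors.setdefault(value, []).append(subset)

# Every 4-bit value arises from exactly one subset of the unit vectors,
# and {0010, 1000} ORs to 1010.
assert all(len(v) == 1 for v in ors.values())
assert len(ors) == 16
assert ors[0b1010] == [(0b0010, 0b1000)]
```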
What are sets like the above called? Is there an index of these that I can take a look at? Are there any recommended introductory readings?
I don’t really understand your constant-recursive sum example, since you are using two different functions called $f$, one taking two inputs and the other taking one.
But your other two examples are actually critically different.
Your first example is that any positive integer can be represented uniquely as a product of primes.
Your second example is that any vector of $n$ bits can be represented uniquely as the bit-wise $\lor$ of unit vectors.
But phrasing things this way obscures something critical; we are using the term “represented uniquely” differently.
In the first case, the theorem is actually that for any positive integer $k$, there is a unique finite multiset $S$ of primes such that $k = \prod\limits_{s \in S} s$.
In the second case, our theorem is that for any $n$-bit vector $v$, there is a unique finite set $S$ of unit vectors such that $v = \bigvee\limits_{s \in S} s$.
Note the key difference - the first involves a unique multiset, while the second involves a unique set. We cannot swap out “set” for “multiset” in either theorem, since $(1, 0) = (1, 0) \lor (1, 0)$, for instance, while $4$ cannot be written as the product of a set of primes.
To unify the two, we must discuss the notion of commutative monoids and their coproducts.
A monoid is a triple $(M, \cdot, e)$ where $M$ is a set, $\cdot$ is a binary operator $\cdot : M^2 \to M$ (written with infix notation - $x \cdot y$ instead of $\cdot(x, y)$), and $e \in M$, such that

- Associativity: for all $x, y, z \in M$, we have $(x \cdot y) \cdot z = x \cdot (y \cdot z)$;
- Identity: for all $x \in M$, we have $e \cdot x = x \cdot e = x$.
Abuse of notation notes: Typically, we will write a monoid as $(M, \cdot)$ (or “the set $M$ under the operation $\cdot$“). This is because if $(M, \cdot, e_1)$ and $(M, \cdot, e_2)$ are both monoids, then $e_1 = e_2$. We will also abusively write the monoid as $M$. Finally, because of associativity, we will typically drop parentheses: we write $x \cdot y \cdot z$ instead of $(x \cdot y) \cdot z$ or $x \cdot (y \cdot z)$, since there is no ambiguity.
A commutative monoid is one which additionally satisfies

- Commutativity: for all $x, y \in M$, we have $x \cdot y = y \cdot x$.
Here are some examples of commutative monoids:

- $(\mathbb{N}, +, 0)$, the natural numbers under addition;
- $(\mathbb{Z}_+, \cdot, 1)$, the positive integers under multiplication;
- $(\{0, 1\}, \lor, 0)$, the two-element monoid under logical or.

Non-examples:

- Strings under concatenation form a monoid (with the empty string as identity), but not a commutative one;
- $(\mathbb{N}, -)$ is not a monoid at all: subtraction is neither total on $\mathbb{N}$ nor associative.
Consider a commutative monoid $(M, \cdot)$. Suppose $S$ is a finite set, and consider $f : S \to M$. Then we define $\prod\limits_{s \in S} f(s)$ as follows: First, write $S = \{s_1, \ldots, s_n\}$, where $n = |S|$. Then $\prod\limits_{s \in S} f(s) = f(s_1) \cdot f(s_2) \cdots f(s_n)$. We can show that this is well-defined - no matter what order we pick the elements of $S$, we get the same answer (this is where commutativity is needed). If the monoid operator is written as $+$, we instead use $\sum\limits_{s \in S}$; if the operator is $\lor$, we use $\bigvee\limits_{s \in S}$, etc.
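For instance, taking the commutative monoid to be $(\mathbb{N}, +)$, a small Python sketch checking order-independence over all enumeration orders of a three-element set:

```python
from functools import reduce
from itertools import permutations
from operator import add

# A sketch of the "product over a finite set" definition for (N, +).
# We sum f(s) over S in every possible enumeration order; commutativity
# and associativity force a single answer.
S = {"a", "b", "c"}
f = {"a": 2, "b": 3, "c": 5}

results = {reduce(add, (f[s] for s in order)) for order in permutations(S)}
assert results == {10}  # well-defined: all 3! orders agree
```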
Consider monoids $(M_1, \cdot_1, e_1)$, $(M_2, \cdot_2, e_2)$. A monoid homomorphism from $M_1$ to $M_2$ is a function $f : M_1 \to M_2$ such that

- $f(e_1) = e_2$, and
- for all $x, y \in M_1$, $f(x \cdot_1 y) = f(x) \cdot_2 f(y)$.
We denote the set of monoid homomorphisms from $M_1$ to $M_2$ by $Mor(M_1, M_2)$ ($Mor$ stands for "morphism").
Examples:

- The identity map $1_M : M \to M$ for any monoid $M$;
- for any fixed $p \in \mathbb{Z}_+$, the map $n \mapsto p^n$ from $(\mathbb{N}, +)$ to $(\mathbb{Z}_+, \cdot)$, since $p^0 = 1$ and $p^{x+y} = p^x \cdot p^y$.

Non-examples:

- The map $n \mapsto n + 1$ from $(\mathbb{N}, +)$ to itself, since it does not send the identity to the identity;
- the map $n \mapsto n^2$ from $(\mathbb{N}, +)$ to itself, since $(x + y)^2 \neq x^2 + y^2$ in general.
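A finite spot-check of these laws in Python (a sketch on a small range, not a proof):

```python
# n -> 2**n as a candidate homomorphism from (N, +) to (Z_+, *):
def h(n):
    return 2 ** n

assert h(0) == 1  # identity of (N, +) goes to identity of (Z_+, *)
assert all(h(x + y) == h(x) * h(y) for x in range(10) for y in range(10))

# ... while n -> n + 1 on (N, +) is not a homomorphism: it fails to
# preserve the identity.
def g(n):
    return n + 1

assert g(0) != 0
```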
You should verify for yourself that

- for any monoid $M$, the identity map $1_M$ is in $Mor(M, M)$, and
- if $f \in Mor(M_1, M_2)$ and $g \in Mor(M_2, M_3)$, then $g \circ f \in Mor(M_1, M_3)$.
In other words, monoids together with monoid homomorphisms form a "concrete category".
Also, you should verify that if $M_1, M_2$ are commutative monoids, $f \in Mor(M_1, M_2)$, $S$ is finite, and $g : S \to M_1$, then $\prod\limits_{s \in S} f(g(s)) = f\left(\prod\limits_{s \in S} g(s)\right)$.
One more definition:
An isomorphism between monoids $M_1$ and $M_2$ is some $f \in Mor(M_1, M_2)$ such that there exists $g \in Mor(M_2, M_1)$ where $g \circ f = 1_{M_1}$ and $f \circ g = 1_{M_2}$. You should verify that an isomorphism between $M_1$ and $M_2$ is just a monoid homomorphism from $M_1$ to $M_2$ which is also a bijection. Two monoids are said to be isomorphic if there is an isomorphism from one to the other; all "nice" properties of monoids can be transferred across an isomorphism, so being isomorphic is a very strong condition.
We now have the tools we need to define the central notion: the coproduct.
Suppose that $I$ is a set, and suppose we have a family of commutative monoids $\{M_i\}_{i \in I}$. A coproduct of the family $\{M_i\}_{i \in I}$ consists of a commutative monoid $C$, together with a family $\{c_i \in Mor(M_i, C)\}_{i \in I}$, which satisfies the following property: for every commutative monoid $F$ and every family $\{f_i \in Mor(M_i, F)\}_{i \in I}$, there is a unique $f \in Mor(C, F)$ such that for all $i \in I$, $f \circ c_i = f_i$.
This is the notion that you are looking for.
We borrow a theorem from category theory: coproducts are unique up to isomorphism. More precisely, if $(D, d)$ is a coproduct of a family and $(C, c)$ is another candidate, then $(C, c)$ is a coproduct if and only if the unique $f \in Mor(D, C)$ satisfying $f \circ d_i = c_i$ for all $i$ is an isomorphism.
Consider a set $I$ and a family of commutative monoids $\{M_i\}_{i \in I}$. Now suppose that $\forall i, j \in I, (i = j \lor i \neq j)$, and assume $\forall i \in I \forall x \in M_i (x = e \lor x \neq e)$. We then have the following: the family $\{M_i\}_{i \in I}$ has a coproduct. Moreover, given a commutative monoid $C$ and a family $\{c_i \in Mor(M_i, C)\}_{i \in I}$, the pair $(C, c)$ is a coproduct if and only if for every $x \in C$ there is a unique pair $(J, m)$, where $J \subseteq I$ is finite and $m$ is a choice of elements $\{m_j \in M_j\}_{j \in J}$ with every $m_j \neq e$, such that $x = \prod\limits_{j \in J} c_j(m_j)$.
Proof: we explicitly construct a coproduct $(D, d)$.
The underlying set of $D$ is the set of all pairs $(J, m)$ such that $J \subseteq I$ is finite, $m$ is a choice of elements $\{m_j \in M_j\}_{j \in J}$, and for all $j \in J$, $m_j \neq e$. The monoid operation is defined by $(J_1, m_1) \cdot (J_2, m_2) = (J_3, m_3)$, where $J_3 = (J_1 \setminus J_2) \cup (J_2 \setminus J_1) \cup \{j \in J_1 \cap J_2 \mid (m_1)_j \cdot (m_2)_j \neq e\}$ and
$$(m_3)_j = \begin{cases} (m_1)_j & j \in J_1 \setminus J_2 \\ (m_2)_j & j \in J_2 \setminus J_1 \\ (m_1)_j \cdot (m_2)_j & j \in J_2 \cap J_1, (m_1)_j \cdot (m_2)_j \neq e \end{cases}$$
We then define
$$d_i(x) = \begin{cases} (\{i\}, i \mapsto x) & x \neq e \\ (\emptyset, \emptyset) & x = e \end{cases}$$
and note that each $d_i$ is indeed a monoid homomorphism. Let us note that for all $(J, m) \in D$, we have $\prod\limits_{j \in J} d_j(m_j) = (J, m)$.
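The construction $(D, d)$ translates almost directly into code. Here is my own illustrative sketch (the names `op` and `d` are arbitrary), specialising every component monoid to $(\mathbb{N}, +)$ with identity $0$; an element $(J, m)$ is stored as a dict sending each $j \in J$ to $m_j \neq 0$:

```python
def op(a, b):
    """Combine (J1, m1) and (J2, m2): merge component-wise, drop identities."""
    out = dict(a)
    for j, x in b.items():
        s = out.get(j, 0) + x
        if s != 0:
            out[j] = s
        else:
            del out[j]  # exactly the condition defining J_3 above
    return out

def d(i, x):
    """The canonical homomorphism d_i : M_i -> D."""
    return {i: x} if x != 0 else {}

e = {}  # the identity of D: the empty index set with the empty choice

# d_i is a homomorphism, and the operation is commutative:
assert op(d(1, 2), d(1, 3)) == d(1, 5)
assert op(d(1, 2), d(2, 7)) == {1: 2, 2: 7}
assert op(d(1, 2), d(2, 7)) == op(d(2, 7), d(1, 2))
assert op(d(1, 2), e) == d(1, 2)
```

Note how the operation drops any index whose combined component is the identity, mirroring the definition of $J_3$.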
Now suppose we have some commutative monoid $F$ and some family $\{f_i \in Mor(M_i, F)\}$. Then let's suppose we had some monoid homomorphism $f \in Mor(D, F)$ such that for all $i$, $f \circ d_i = f_i$. Then for all $(J, m) \in D$, we have $f(J, m) = f(\prod\limits_{j \in J} d_j(m_j)) = \prod\limits_{j \in J} f(d_j(m_j)) = \prod\limits_{j \in J} f_j (m_j)$. Conversely, we see that the map $f$, defined by $f(J, m) = \prod\limits_{j \in J} f_j (m_j)$, is a monoid homomorphism, and we see that for all $i \in I$, $f \circ d_i = f_i$. Thus, we have verified that $(D, d)$ is indeed a coproduct.
Now consider the unique monoid homomorphism $f : D \to C$ such that for all $i$, $f \circ d_i = c_i$. Then $(C, c)$ is a coproduct if and only if $f$ is an isomorphism, which is the case if and only if $f$ is a bijection, which is the case if and only if for all $x \in C$, there is a unique $(J, m) \in D$ such that $f(J, m) = x$. Recall from the above that $f(J, m) = \prod\limits_{j \in J} c_j(m_j)$, so to say there is a unique $(J, m) \in D$ such that $f(J, m)= x$ is exactly to say that there is a unique pair $(J, m)$, such that $J \subseteq I$ is finite, $m$ is a choice $\{m_j \in M_j\}_{j \in J}$, $\forall j \in J(m_j \neq e)$, and $x = \prod\limits_{j \in J} c_j(m_j)$. This completes the proof. $\square$
Let us go back to your first example involving the prime factorization of positive integers. For every prime number $p$, consider the monoid homomorphism $c_p \in Mor((\mathbb{N}, +), (\mathbb{Z}_+, \cdot))$ defined by $c_p(n) = p^n$. The fundamental theorem of arithmetic then states that $(\mathbb{Z}_+, c)$ is a coproduct.
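In this picture, factoring a positive integer means computing its unique pair $(J, m)$: a finite set of primes $J$ together with nonzero exponents $m_p$. A small sketch by trial division (the function name `factor` is my own, for illustration only):

```python
def factor(k):
    """Return the pair (J, m) as a dict prime -> exponent (all nonzero)."""
    m, p = {}, 2
    while p * p <= k:
        while k % p == 0:
            m[p] = m.get(p, 0) + 1
            k //= p
        p += 1
    if k > 1:  # whatever remains is a single prime factor
        m[k] = m.get(k, 0) + 1
    return m

assert factor(360) == {2: 3, 3: 2, 5: 1}   # 360 = 2^3 * 3^2 * 5
assert factor(30) == {2: 1, 3: 1, 5: 1}
assert factor(1) == {}
```

The empty dict corresponds to $(\emptyset, \emptyset)$, the factorisation of $1$.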
Now, let us consider your example involving bit vectors. Let 2 represent the commutative monoid $\{0, 1\}$ under the $\lor$ operation, and let $2^n$ represent the commutative monoid of $n$-bit vectors under the bitwise $\lor$ operation. Then for each $i$ between 1 and $n$, consider the monoid homomorphism $c_i \in Mor(2, 2^n)$ which sends $1$ to the $i$th unit vector (and $0$ necessarily must go to the zero vector). Your final example simply states that $(2^n, c)$ is a coproduct.
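Here the unique pair $(J, m)$ for a vector $v$ degenerates to a plain set: $J$ is the set of positions where $v$ has a $1$, and $m$ is forced (the only non-identity element of 2 is $1$). A sketch (the names `decompose` and `recompose` are my own):

```python
n = 4

def decompose(v):
    """The unique set J of unit-vector positions for the vector v."""
    return {i for i in range(n) if v & (1 << i)}

def recompose(J):
    """The bitwise OR of the unit vectors indexed by J."""
    out = 0
    for i in J:
        out |= 1 << i
    return out

# The decomposition is a bijection: unique representation.
assert all(recompose(decompose(v)) == v for v in range(2 ** n))
assert decompose(0b1010) == {1, 3}
```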
Now, we can answer the question which I hinted at from the beginning. With the prime factorisation theorem, we found that there was a unique finite multiset of primes representing the factorisation of a number. This is because a finite multiset of primes corresponds to a finite set $J$ of primes together with a map $m : J \to \mathbb{N}$ such that for all $p \in J$, $m_p \neq 0$. On the other hand, in the bit-vector case, a pair consisting of a finite set $J$ of numbers between 1 and $n$ and a map $m : J \to 2$ with $m_j \neq 0$ for all $j \in J$ is determined by $J$ alone, since there is only one such $m$ for every such $J$; that is why a plain set suffices there. Both $\mathbb{N}$ and $2$ are generated by a single element, meaning that there is some element $1$ from which every element of the monoid can be obtained through the monoid operation, but the two monoids are different: in $\mathbb{N}$ we have $1 + 1 \neq 1$, while in $2$ we have $1 \lor 1 = 1$.