Say you have a set
A = { 1, 5, 10 }
which when put through a function, produces the set:
B = { 1, 5, 10, 6, 11, 15, 16 }
That is, f(A) = B. I’m not sure of the proper way to write the math for this, but the gist is that the output consists of the sums of all combinations of elements of the input set, without duplicates. So, for example, here's how the elements are computed in the example above:
B = { 1, 5, 10, 5+1, 10+1, 10+5, 10+5+1 }
Each individual item in A is combined with the others so that the output is a list of unique values. That is, since 1+5 is the same as 5+1, we only use that value once. A more concise way to describe the function: it produces the set of sums of all nonempty combinations of A, with duplicates removed.
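For reference, here's a minimal sketch of the forward direction in Python, using `itertools.combinations` to enumerate the nonempty subsets (the function name `subset_sums` is just my own label for it):

```python
from itertools import combinations

def subset_sums(xs):
    """Return the set of sums of all nonempty combinations of xs."""
    xs = list(xs)
    return {sum(c)
            for r in range(1, len(xs) + 1)
            for c in combinations(xs, r)}

print(subset_sums({1, 5, 10}))  # {1, 5, 6, 10, 11, 15, 16}
```

Building a set (rather than a list) handles the deduplication automatically, since 1+5 and 5+1 are never generated separately by `combinations` anyway, and any accidental collisions between different subsets collapse to one value.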
The second (and most important) part to this is to go the other direction: given B, how do you figure out A?
Since I’m so new to this whole field, my question can be broken down into a few parts:
- What is the proper way to write the mathematics described above?
- What branch of mathematics would generally cover this problem?
- What would a solution to this specific problem look like?
I'm generally coming from more of a self-taught programming background, so I'm sure there are plenty of gaps in my knowledge. Sorry for my ignorance, and thanks in advance!
I've explored a bit of matrix algebra for this, and I've had some success with combining A and its transpose, taking the lower triangular matrix from the result, and combining this with the original values of A. I was hoping there would be a straightforward way to reverse this process? (Possibly wishful thinking?)
I think there may also be some way to puzzle through this by looking at which combinations of values in B produce the others, but I imagine there's a much more elegant way to go about it.
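That "puzzle through" idea can be made concrete under some assumptions: if all elements are positive, the smallest value in B must itself be an element of A (it can't be a sum of anything smaller), and any later value of B that isn't already reachable as a sum of the elements recovered so far must be a new element. A hedged sketch, assuming positive elements and that no two distinct subsets of A happen to collide to the same sum (if they do, the recovery is ambiguous):

```python
def recover(B):
    """Greedily recover candidate elements of A from the set B of
    nonempty-subset sums. Assumes all elements are positive and that
    distinct subsets have distinct sums."""
    elements = []
    reachable = set()          # subset sums of the elements found so far
    for b in sorted(B):
        if b not in reachable:
            elements.append(b)
            # new reachable sums: b alone, plus b added to each old sum
            reachable |= {b} | {b + r for r in reachable}
    return elements

print(recover({1, 5, 6, 10, 11, 15, 16}))  # [1, 5, 10]
```

When collisions do occur (e.g. A = {1, 2, 3}, where 1+2 = 3), several different input sets can map to the same B, so the best any procedure can do is return one valid preimage.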
I've asked a corresponding question regarding the CS aspects of this problem here: https://stackoverflow.com/questions/56825784/is-there-a-standard-approach-to-solving-non-injective-functions
I would write it as $$ f(X) = \Bigl\{\sum_{a\in S} a \Bigm| S\subseteq X,\ S\text{ is finite},\ S\ne\varnothing\Bigr\}$$ (using $S$ for the subset variable to avoid clashing with your input set $A$).
This is a combination of set builder notation and an indexed sum. Both of these are so widespread as part of "ordinary mathematical notation" that it's hard to assign them to a particular "branch" of mathematics. They're usually taught, more or less, in first-year courses that have a different main focus, as part of an effort to give students general familiarity with mathematical notation.
You could say they're part of set theory, but if you pick up a textbook dedicated to "set theory" in particular, it's probably going to zip past this kind of elementary notation in ten seconds flat.