Formal notion of "which indices contain variation" in a set of bitstrings


I am interested in defining a function $f$ that maps a set of length-$n$ bitstrings to a list of $n$ numbers in $[0,1]$. Each number can be interpreted as indicating how much of the variation among those bitstrings involves that index. For example:

  • $f(\{0,1\})= \langle 1 \rangle$ (the set contains two elements, so one bit of variation, and there is only one index that it involves)

  • $f(\{001, 000\}) = \langle 0,0,1 \rangle $ (there is one bit of variation, and it only involves the final index)

  • $f(\{010, 011, 100, 101\}) = \langle \frac{1}{2}, \frac{1}{2}, 1 \rangle $ (there are two bits of variation, and half of it involves the last index)

  • $f(\{100, 010, 001, 111\}) = \langle \frac{2}{3},\frac{2}{3},\frac{2}{3}\rangle $ (there are two bits of variation, and they involve all of the indices.)

(The entries of the output should always sum to $\log_2 m$, where $m$ is the number of bitstrings provided.)
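(One candidate worth mentioning, which does reproduce all four examples above, is the Shapley value of the entropy function: treat the set as a uniform distribution, define $H(S)$ as the entropy of the projection onto a subset $S$ of indices, and allocate the joint entropy among the indices by the Shapley formula. A rough sketch in Python, offered only as a conjectured match, not a known name for this function:

```python
from itertools import combinations
from math import log2, factorial

def H(strings, idxs):
    """Empirical Shannon entropy of the projection of the (uniformly
    weighted) set of strings onto the given tuple of indices."""
    counts = {}
    for s in strings:
        key = tuple(s[i] for i in idxs)
        counts[key] = counts.get(key, 0) + 1
    m = len(strings)
    return -sum(c / m * log2(c / m) for c in counts.values())

def f(strings):
    """Shapley-value allocation of the joint entropy among indices:
    each index i is credited its average marginal contribution
    H(S + {i}) - H(S) over all subsets S of the other indices."""
    n = len(strings[0])
    phi = []
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            # Shapley weight for coalitions of size k
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(rest, k):
                total += w * (H(strings, S + (i,)) - H(strings, S))
        phi.append(total)
    return phi
```

By the efficiency property of the Shapley value, the entries automatically sum to the joint entropy $\log_2 m$ whenever the $m$ strings are distinct; e.g. `f(["010", "011", "100", "101"])` gives $\langle \frac{1}{2}, \frac{1}{2}, 1 \rangle$ as in the third example.)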

Is there a well-known function that does this, or are there terms I should look up in order to understand this problem?