Suppose we have a sequence of bits, one for each natural number less than $n$. For example, with $n = 3$ bits (each 0 or 1), we can represent the sequence as a function of $x < 3$ in the following table, where $\omega = e^{2 \pi i / 3}$:
$$ \begin{array}{|ccc|c|} \hline f(0) & f(1) & f(2) & \text{representative function} \\ \hline 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & \frac{\omega^{x+1} + \omega^{2(x+1)} + \omega^{3(x+1)}}{3} \\ 0 & 1 & 0 & \sin^2{\left(\tfrac{\pi x}{2}\right)} \\ 0 & 1 & 1 & 1 - \frac{\omega^{x} + \omega^{2x} + \omega^{3x}}{3} \\ 1 & 0 & 0 & \frac{\omega^{x} + \omega^{2x} + \omega^{3x}}{3} \\ 1 & 0 & 1 & \cos^2{\left(\tfrac{\pi x}{2}\right)} \\ 1 & 1 & 0 & 1 - \frac{\omega^{x+1} + \omega^{2(x+1)} + \omega^{3(x+1)}}{3} \\ 1 & 1 & 1 & 1 \\ \hline \end{array} $$
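As a quick numerical sanity check (my own, not part of the question), here is a short script that evaluates the roots-of-unity rows of the table at $x = 0, 1, 2$ (the indexing under which the rows come out right, since $\frac{\omega^{x} + \omega^{2x} + \omega^{3x}}{3}$ is $1$ exactly when $3 \mid x$) and confirms they reproduce the claimed bit patterns:

```python
import cmath

# omega = e^{2*pi*i/3}, a primitive cube root of unity
omega = cmath.exp(2j * cmath.pi / 3)

def spike(x, shift=0):
    """(w^(x+s) + w^(2(x+s)) + w^(3(x+s))) / 3: equals 1 when 3 | (x + s), else 0."""
    z = x + shift
    return ((omega ** z + omega ** (2 * z) + omega ** (3 * z)) / 3).real

# The four roots-of-unity rows of the table, keyed by their bit patterns.
rows = {
    (1, 0, 0): lambda x: spike(x),
    (0, 0, 1): lambda x: spike(x, 1),
    (0, 1, 1): lambda x: 1 - spike(x),
    (1, 1, 0): lambda x: 1 - spike(x, 1),
}
for bits, f in rows.items():
    # round() clears the ~1e-16 floating-point residue from the complex arithmetic
    assert tuple(round(f(x)) for x in range(3)) == bits
print("all roots-of-unity rows check out")
```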
What I'm wondering is: if we are allowed to use any function, how much can we compress the computation of a sequence of $n$ bits, on average?
In the example above, I had trouble compressing roughly half of the sequences and had to resort to roots of unity. I'm wondering whether there is a better way to compress sequences. I have a feeling that if we could take $f(x) = ((g(x) \bmod y) \bmod z) \dots$ we would get better results.
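To get a feel for the nested-mod idea, here is a toy experiment of my own (the choice $g(x) = cx + d$ with $z = 2$ is just the simplest member of the $((g(x) \bmod y) \bmod z)$ family, not anything canonical): brute-force the smallest constants $(c, d, y)$ such that $f(x) = ((cx + d) \bmod y) \bmod 2$ reproduces each 3-bit sequence.

```python
from itertools import product

def fits(bits, c, d, y):
    """Check whether ((c*x + d) mod y) mod 2 reproduces the bit sequence."""
    return all(((c * x + d) % y) % 2 == b for x, b in enumerate(bits))

def smallest(bits, limit=10):
    """Brute-force the first (c, d, y) with all parameters below `limit`."""
    for y, c, d in product(range(1, limit), range(limit), range(limit)):
        if fits(bits, c, d, y):
            return c, d, y
    return None  # nothing found within the search bound

for bits in product((0, 1), repeat=3):
    print(bits, "->", smallest(bits))
```

In this small search every one of the 8 sequences is hit by single-digit constants, so even this linear $g$ avoids the roots-of-unity machinery here; whether that advantage persists, on average, as $n$ grows is exactly the question.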