How can one do a fast bit-wise inverse derivative / gradient over $Z_2$ or $GF(2)$?


In computer language what I have is a "bit-stream" in 1D or a "bit-map" in 2D.

In Boolean algebra, "xor" corresponds to difference or sum in $Z_2$ or $GF(2)$, whereas "and" corresponds to multiplication.

A finite difference operator traditionally used in engineering over continuous fields is $[1,-1]/2$. In $Z_2$, where $-1 = 1$ and division by $2$ is unavailable, we can approximate it with $[1,1]$.

Convolving with it amounts to an xor between neighboring elements of a vector, or, on a 2-dimensional structure, to a partial-difference filter along one axis.

For non-finite fields we can pose and solve inverse-gradient problems. This is common in engineering and the sciences, where differential equations, scalar potentials and so on are so important.

Now to my question: how would one calculate a fast finite-field version of this?


Own work: In 1D the problem simply becomes finding the integral operator, whose impulse response is $1$ at the current position and everywhere after it, and $0$ before. In other words, a cumulative loop of xors (a running prefix xor). Does this make sense?

For example the vector [0,0,0,1,1,1,0,0,0,1,1,1]

will have "derivative"

[0,0,0,1,0,0,1,0,0,1,0,0]

Then "integrating" it $$ 0 \oplus 0 = 0\\ 0 \oplus 0 = 0\\ 0 \oplus 0 = 0\\ 0 \oplus 1 = 1\\ 1 \oplus 0 = 1\\ 1 \oplus 0 = 1\\ 1 \oplus 1 = 0\\ 0 \oplus 0 = 0\\ 0 \oplus 0 = 0\\ 0 \oplus 1 = 1\\ 1 \oplus 0 = 1\\ 1 \oplus 0 = 1$$

The right-hand side of each step becomes the first operand of the next one, while the prescribed differential / difference vector supplies the second operands.

This is easy to do in electronics with a shift register, or in low-level software as a loop, given access to xor and bit shifts. But what if we want to do it faster than that, perhaps in parallel or in "one go" if possible?

A few years ago I learned of the new "popcnt" CPU instruction.

Will a "masked" population count work? Something like:

    xor_val_of_var_at_pos_N = popcnt(var & ((1 << (N + 1)) - 1)) & 0x1;

And even more complicated: in 2 or more dimensions, where an exact solution is perhaps not even guaranteed to exist, how should one approach the problem? Which norm should we aim to minimize? Will we have to relax the problem to a continuous domain and optimize there?