How should I compensate for numerical inaccuracy in fixed-point multiplication?


I am experimenting with fixed-point arithmetic. I found a library that does fixed-point math using vectors of 32-bit integers. At the end of its mulfpu (unsigned fixed-point multiplication) function, they "add 3 to compensate for the truncation". I figure the authors didn't want to perform a bunch of mul_hi calls on the lower-order terms just to get a couple of bits exactly right.

Is there a general formula for the proper fudge factor for various forms of fixed-point multiplication?
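To make the question concrete, here is a scalar Python model of the kind of truncation I mean. The limb count and the rule for which partial products get skipped are my guesses, not the library's actual layout; the point is that the dropped low-order terms are always nonnegative and bounded, so the mean error over random inputs is a natural candidate for the fudge factor:

```python
import random

# Scalar model of truncated fixed-point multiplication (my guess at the
# scheme, not the library's actual code): NLIMBS little-endian limbs of
# LIMB_BITS bits each; the product keeps the top NLIMBS limbs of the
# full 2*NLIMBS-limb result.
LIMB_BITS = 16
NLIMBS = 3
CUT = NLIMBS * LIMB_BITS  # bit position where the retained limbs start

def pack(limbs):
    """Little-endian limbs -> one Python int."""
    return sum(limb << (i * LIMB_BITS) for i, limb in enumerate(limbs))

def mul_exact(a, b):
    """All partial products, then truncate: the reference result."""
    return (pack(a) * pack(b)) >> CUT

def mul_cheap(a, b, fudge=0):
    """Skip every partial product a[i]*b[j] whose column i+j lies
    entirely below the lowest retained limb (i + j < NLIMBS - 1)."""
    acc = 0
    for i in range(NLIMBS):
        for j in range(NLIMBS):
            if i + j >= NLIMBS - 1:
                acc += (a[i] * b[j]) << ((i + j) * LIMB_BITS)
    return (acc >> CUT) + fudge

# Measure the truncation error over random inputs to pick a fudge factor:
# the error is exact - cheap, always >= 0 since only nonnegative terms
# were dropped.
rng = random.Random(0)
errs = []
for _ in range(10_000):
    a = [rng.getrandbits(LIMB_BITS) for _ in range(NLIMBS)]
    b = [rng.getrandbits(LIMB_BITS) for _ in range(NLIMBS)]
    errs.append(mul_exact(a, b) - mul_cheap(a, b))

print("mean error:", sum(errs) / len(errs), "max error:", max(errs))
```

In this model the dropped columns can contribute at most a couple of quanta (roughly one quantum per dropped column that touches the cut, plus a lost carry), so rounding the mean error to the nearest integer recenters the result. If the real kernel skips a different set of mul_hi/mul_lo terms, the bound changes accordingly, which is exactly what I'd like a general formula for.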

When I make a variation of this logic that uses three 4-vectors of 16-bit unsigned ints (ushort4 a[3]) and compare the results against Python's mpmath (using extra bits of precision), the average error is about 8 quanta, with a range of roughly 4-12.

If I use a different variation with three 2-vectors of 32-bit unsigned ints (uint2 a[3]), the average error is about 3.4 quanta, with a range of roughly 1-6.

All of this is muddied by the fact that I don't know whether there are defects in my multiplication routines or dark corners in Python's mpmath library.
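For the "defects in my routines" part, one cross-check that takes mpmath out of the loop entirely is to validate the limb-by-limb multiply against Python's arbitrary-precision integers directly. A sketch (the limb layout mirrors my ushort4 a[3] variant; the schoolbook routine here is a stand-in for my actual kernel, not its real code):

```python
import random

LIMB_BITS = 16
NLIMBS = 12  # mirrors three ushort4 vectors = 12 sixteen-bit limbs

def pack(limbs):
    """Little-endian limbs -> one Python int."""
    return sum(limb << (i * LIMB_BITS) for i, limb in enumerate(limbs))

def mul_full(a, b):
    """Schoolbook multiply keeping every partial product (stand-in for
    the routine under test), truncated to the top NLIMBS limbs."""
    acc = 0
    for i in range(NLIMBS):
        for j in range(NLIMBS):
            acc += (a[i] * b[j]) << ((i + j) * LIMB_BITS)
    return acc >> (NLIMBS * LIMB_BITS)

# Ground truth straight from Python's big ints: if these ever disagree,
# the defect is in the multiply routine, not in the reference.
rng = random.Random(1)
for _ in range(1_000):
    a = [rng.getrandbits(LIMB_BITS) for _ in range(NLIMBS)]
    b = [rng.getrandbits(LIMB_BITS) for _ in range(NLIMBS)]
    assert mul_full(a, b) == (pack(a) * pack(b)) >> (NLIMBS * LIMB_BITS)
```

Once the full-precision version checks out this way, any remaining discrepancy against mpmath would point at either the truncation scheme or the reference setup rather than the multiply itself.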