Suppose we have about 100 random (exact) decimal floats, such as:
$$ A_1 = 1234123.428\\ A_2 = 13713.4193\\ A_3 = 0.1332\\ A_4 = 123.13213\\ ...$$
Now I want to find an efficient way to get the fractional part of:
$A_1 \times A_2 \times A_3 \times \cdots \times A_n$
to within 4 digits, say. Is there an efficient algorithm to do this? Note that we can't simply multiply all the numbers together, because machine-precision floating-point arithmetic is only accurate to, say, 10 significant figures, so the fractional part would be lost.
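To see the loss concretely, here is a small Python demonstration (the repeated value 9999.1234 is just made-up sample data): once the product exceeds $2^{53}$, a double is necessarily an integer, so its fractional part is exactly zero.

```python
from fractions import Fraction

vals = ["9999.1234"] * 30      # 30 exact inputs (made-up sample data)

approx = 1.0
exact = Fraction(1)
for v in vals:
    approx *= float(v)         # ordinary double-precision product
    exact *= Fraction(v)       # exact rational product, for comparison

# The double is ~1e120; every double above 2**53 is an integer,
# so its "fractional part" is exactly 0.
print(approx % 1.0)            # 0.0

# The true fractional part, from the exact product:
true_frac = exact - exact.numerator // exact.denominator
print(float(true_frac))       # nonzero
```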
Edit: We are assuming the floats $A_i$ are exact. Also, we know we could use some BigNumber library to multiply all the values to exact precision and then look at the fractional part, but I'm looking for a more efficient algorithm (if one exists), perhaps using some simplification based on modular arithmetic.
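One concrete form of the modular idea, when every $A_i$ has a known, finite number of decimal places: write $A_i = a_i/10^{d_i}$ with $a_i$ an integer, so the product is $N/10^D$ with $N = \prod_i a_i$ and $D = \sum_i d_i$, and its fractional part is $(N \bmod 10^D)/10^D$. Reducing modulo $10^D$ after every multiplication keeps all intermediates below $10^{2D}$ instead of letting them grow to the full product. A sketch in Python (the function name and the `decimal`-based digit extraction are my own choices):

```python
from decimal import Decimal

def frac_of_product(values, digits=4):
    # Each value is an exact decimal string, A_i = a_i / 10**d_i.
    ints, D = [], 0
    for v in values:
        d = Decimal(v)
        places = -d.as_tuple().exponent     # d_i: decimal places of this value
        ints.append(int(d.scaleb(places)))  # a_i: integer significand
        D += places
    mod = 10 ** D
    N = 1
    for a in ints:
        N = (N * a) % mod                   # intermediates stay below mod**2
    # The fractional part of N / 10**D is (N mod 10**D) / 10**D;
    # return its first `digits` decimal digits as an integer.
    return (N * 10**digits) // mod

print(frac_of_product(["2.5", "3.5"]))      # 2.5 * 3.5 = 8.75 -> 7500
```

Note that $10^D$ still has hundreds of digits for 100 inputs of this size, so this only trims constant factors relative to the full big-integer product; it does not reduce the problem to machine-word arithmetic.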
It is even worse than that. The fractional part is not even well defined. Your $A_1$ is of order $10^7$, so there is uncertainty of the order of $10^{-3}$, which means the last place could be wrong. Now if I just square $A_1$ and represent the error by $\delta$, we have $(A_1+\delta)^2\approx A_1^2+2\delta A_1$. The leading term is of order $10^{14}$ and $\delta A_1$ is of order $10^4$. Even if we can do this squaring exactly, the accuracy of the input data says we do not know the four digits in front of the decimal point, let alone the fractional part.
If you want to take your input data as exact, you can certainly implement a multiple-precision multiply, carry enough digits to make sure you have them all right, and take the fractional part. I don't think there is anything better.
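In Python, for instance, the exact multiply-then-truncate can be done with the standard `fractions` module, since rational arithmetic on decimal strings is exact (a sketch, not tuned for speed; the function name is my own):

```python
from fractions import Fraction

def exact_fractional_part(values):
    # Multiply exactly as rationals; Fraction("13713.4193") parses the
    # decimal string with no binary rounding at all.
    p = Fraction(1)
    for v in values:
        p *= Fraction(v)
    return p - p.numerator // p.denominator   # p - floor(p), assuming p >= 0

frac = exact_fractional_part(["1234123.428", "13713.4193", "0.1332", "123.13213"])
print(f"{float(frac):.4f}")   # first four digits of the fractional part
```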