Ever since my early days, when nerdy science kids compared their computers' speed by calculating ridiculous numbers of digits of $\pi, e, \sqrt{2}$, I have been slightly curious about how this is done in practice.
How does one use the Fast Fourier Transform to multiply two fixed-precision numbers represented in binary?
I am able to do it both by direct convolution and by FFT for numbers represented in (complex) floating point. But how do I avoid floating point when doing the FFT? The version of the DFT that I (as an engineer) am used to is:
$$X_k = \sum_n \exp\left(\frac{i2\pi kn}{N}\right)x[n]$$
But the coefficients here are complex numbers, and irrational as well. It does not get much simpler with an FFT factorization; we still get irrational numbers, which are really nasty to represent to any finite precision in a place-value system like binary.
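For concreteness, this is roughly the floating-point approach I have working (a minimal sketch using NumPy; the function name and the choice of base 10 are mine):

```python
import numpy as np

def fft_multiply(x, y):
    """Multiply two numbers given as base-10 digit lists, least
    significant digit first, via FFT-based convolution."""
    # Pad to a power of two at least len(x) + len(y) so the circular
    # convolution equals the linear one
    n = 1
    while n < len(x) + len(y):
        n *= 2
    fx = np.fft.fft(x, n)
    fy = np.fft.fft(y, n)
    # Pointwise product in the frequency domain = convolution of digits
    z = np.fft.ifft(fx * fy).real
    # The exact convolution coefficients are integers; round away the
    # floating-point noise, then propagate carries in base 10
    digits = [int(round(v)) for v in z]
    carry = 0
    for i in range(len(digits)):
        carry, digits[i] = divmod(digits[i] + carry, 10)
    return digits
```

For example, `fft_multiply([4, 3, 2, 1], [8, 7, 6, 5])` computes $1234 \times 5678$ digit by digit. The rounding step is exactly where my worry about finite precision comes in.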
So how is this done in practice?
Own work: The little abstract algebra I have studied involves cyclic groups, and I recall that the cyclic group is somehow a "friend" of both the complex roots of unity and of integer multiplication modulo some integer. Maybe that can be used somehow?
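To make that hunch concrete, here is my sketch of a transform over integers modulo a prime instead of over $\mathbb{C}$, replacing $e^{i2\pi/N}$ with an element of multiplicative order $N$ (the prime $998244353 = 119 \cdot 2^{23} + 1$ and its primitive root $3$ are standard constants I looked up, not derived):

```python
P = 998244353  # prime of the form c * 2^23 + 1, so large power-of-two
G = 3          # transform lengths divide P - 1; 3 generates (Z/P)*

def ntt(a, invert=False):
    """Radix-2 DFT over Z/P; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    # w plays the role of exp(2*pi*i/n): an element of order n mod P
    w = pow(G, (P - 1) // n, P)
    if invert:
        w = pow(w, P - 2, P)  # modular inverse via Fermat's little theorem
    even = ntt(a[0::2], invert)
    odd = ntt(a[1::2], invert)
    out = [0] * n
    wk = 1
    for k in range(n // 2):
        t = wk * odd[k] % P
        out[k] = (even[k] + t) % P
        out[k + n // 2] = (even[k] - t) % P
        wk = wk * w % P
    return out

def multiply(x, y):
    """Multiply base-10 digit lists (least significant first), exactly."""
    n = 1
    while n < len(x) + len(y):
        n *= 2
    fx = ntt(x + [0] * (n - len(x)))
    fy = ntt(y + [0] * (n - len(y)))
    z = ntt([a * b % P for a, b in zip(fx, fy)], invert=True)
    n_inv = pow(n, P - 2, P)
    z = [v * n_inv % P for v in z]
    carry = 0  # carry propagation in base 10
    for i in range(len(z)):
        carry, z[i] = divmod(z[i] + carry, 10)
    return z
```

Every operation here is exact integer arithmetic, so the irrational-coefficient problem never arises; the catch (if I understand it) is that the convolution coefficients must stay below $P$ for the result to be correct.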
You might want to read the excellent reference http://numbers.computation.free.fr/Constants/Algorithms/fft.html, and especially Section 2.2, which deals with numerical errors. Generally, you cannot avoid the finite precision of the Fourier coefficients, save for some special numbers. However, the game here is not floating point versus fixed point but rather the number of bits: whether you use 32 or 64 bits to express the coefficients matters more than whether they are fixed or floating point. The FFT method helps you better exploit the available bits, since the coefficients are often much smaller than the original numbers. I hope this helps.
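To illustrate that last point (a sketch with assumed parameters, not a tight error analysis): the exact convolution coefficients of two $N$-digit base-10 sequences are integers bounded by $81N$, tiny compared with the numbers themselves, so double precision recovers them exactly by rounding even for quite large $N$.

```python
import numpy as np

# Random base-10 digit sequences standing in for two large numbers
rng = np.random.default_rng(0)
n_digits = 10000
x = rng.integers(0, 10, n_digits)
y = rng.integers(0, 10, n_digits)

# Zero-pad to a power of two so circular convolution = linear convolution
n = 1 << (2 * n_digits - 1).bit_length()
z = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(y, n)).real

# Exact coefficients are integers bounded by 81 * n_digits ~ 8.1e5,
# far below 2^53, so the distance to the nearest integer is tiny
err = np.max(np.abs(z - np.round(z)))
print(err)
```

Here `err` comes out many orders of magnitude below $0.5$, so rounding reconstructs the integer coefficients exactly; it is when the coefficients approach the mantissa size that you must shrink the digit base or switch methods.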