I would like to know what the state of the art is in research on the discrete Fourier transform. I have listed some questions to help answer this; please add your own to make the list more comprehensive.
1) Is it possible to compute the DFT faster than in $O(N\log N)$? Are there proofs for or against this?
2) For sparse inputs, I heard there is now an algorithm faster than the FFT. Are there any other faster-than-FFT algorithms for specific input types?
3) Is it known how many additions and multiplications, respectively, are needed to compute the DFT for inputs of size $2^P$? Has a lower bound been proven (for either or both)?
4) What other improvements have been made during the past ten years or so?
Proofs, explanations, and URLs to recent good papers are welcome. I know that I have several questions here, but I think it serves a good purpose. For someone who would like to enter the field, it is really difficult to find the right material among the thousands of publications and other resources on the internet.
I'm not sure if you've heard of FFTW. Rather than requiring that the input vector have a length that is a power of $2$ (and zero-padding it if it doesn't), FFTW factorizes the length of the input, breaks the transform into subtransforms whose sizes are those factors, and performs a fast transform on each of the resulting subproblems. The result is typically much faster than zero-padding and running a traditional radix-2 FFT. Visit the site for details.
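To illustrate the idea (this is only a minimal sketch of mixed-radix Cooley-Tukey, not FFTW's actual implementation, which uses generated codelets and other algorithms such as Rader's for prime sizes), here is a recursive DFT that splits any composite length $n = p \cdot m$ on its smallest prime factor and falls back to the naive $O(n^2)$ DFT only when the length is prime:

```python
import cmath

def dft_naive(x):
    # Direct O(n^2) evaluation; used here for prime-length subproblems.
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def smallest_factor(n):
    # Smallest prime factor of n (returns n itself if n is prime).
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p
        p += 1
    return n

def fft_mixed_radix(x):
    # Cooley-Tukey decimation in time for arbitrary composite lengths:
    # split n = p * m on the smallest prime factor p, recurse on the p
    # interleaved subsequences of length m, then combine the subresults
    # with twiddle factors exp(-2*pi*i*r*k/n).
    n = len(x)
    p = smallest_factor(n)
    if p == n:  # n is prime (or 1): no further splitting possible
        return dft_naive(x)
    m = n // p
    subs = [fft_mixed_radix(x[r::p]) for r in range(p)]
    return [sum(subs[r][k % m] * cmath.exp(-2j * cmath.pi * r * k / n)
                for r in range(p))
            for k in range(n)]
```

Note that this sketch degrades to $O(n^2)$ for prime $n$; FFTW avoids that by switching to Rader's or Bluestein's algorithm, which keep prime sizes at $O(n\log n)$.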