Factorization in Fast Fourier Transform


I am trying to understand the FFT using matrix factorization, and there is just one step that I am unclear about. I took a look at this question (Fast Fourier Transform as Matrix Factorization) but did not fully understand the last point in the answer. I have also tried taking a look at these notes by Gilbert Strang.

This is my current understanding of the algorithm. Suppose we wish to take the DFT of a signal with $2n$ components, where $n = 2^k$ for some $k \in \mathbb{N}$. We can do this more efficiently using the FFT algorithm, which starts by decomposing our signal into an even-indexed portion and an odd-indexed portion. The whole idea is that we can instead apply a DFT of size $n$ to each of these parts to save on computational cost, and of course we can do this recursively until we achieve $\mathcal{O}(N \log_2{N})$ cost for a signal of length $N = 2n$.
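As a sanity check on this understanding, here is a minimal sketch of the recursive even/odd splitting in Python, assuming the usual $e^{-2\pi i jk/N}$ sign convention (the one NumPy's `fft` uses); the function name and structure are just illustrative:

```python
import numpy as np

def fft_recursive(x):
    # Radix-2 Cooley-Tukey sketch; assumes len(x) is a power of two.
    N = len(x)
    if N == 1:
        return x.astype(complex)
    even = fft_recursive(x[0::2])   # size-N/2 DFT of even-indexed samples
    odd = fft_recursive(x[1::2])    # size-N/2 DFT of odd-indexed samples
    w = np.exp(-2j * np.pi * np.arange(N // 2) / N)  # twiddle factors
    # Combine: first half uses +, second half uses - (the "butterfly")
    return np.concatenate([even + w * odd, even - w * odd])

x = np.random.rand(8)
print(np.allclose(fft_recursive(x), np.fft.fft(x)))  # True
```

The `concatenate` line is exactly the $\begin{bmatrix} I & D \\ I & -D \end{bmatrix}$ step applied to the two half-size transforms.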

More specifically, we factorize the Fourier matrix $F_{2n}$ into three matrices, $$F_{2n} = \begin{bmatrix} I & D \\ I & -D \end{bmatrix}\begin{bmatrix} F_{n} & 0 \\ 0 & F_n \end{bmatrix} P$$ where $P$ is a permutation matrix that separates the even-indexed and odd-indexed entries of the signal, and $D$ is the diagonal matrix $\operatorname{diag}(1, \omega, \omega^2, \ldots, \omega^{n-1})$ whose $i$th diagonal entry (starting from $i = 0$) is the $i$th power of the primitive $2n$th root of unity $\omega$.
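This factorization can be verified numerically. The sketch below, assuming the convention $F_N[j,k] = \omega_N^{jk}$ with $\omega_N = e^{-2\pi i/N}$ (the identity holds for either sign choice, since $\omega_{2n}^n = -1$ both ways), builds all three factors for $n = 4$ and checks the product against $F_{2n}$:

```python
import numpy as np

def dft_matrix(N):
    # F_N[j, k] = w^(j*k) with w = exp(-2*pi*i/N)
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(-2j * np.pi * j * k / N)

n = 4                                # full transform has size 2n = 8
w = np.exp(-2j * np.pi / (2 * n))    # primitive (2n)-th root of unity
D = np.diag(w ** np.arange(n))       # diag(1, w, ..., w^(n-1))
I = np.eye(n)

# The "butterfly" factor [I D; I -D]
B = np.block([[I, D], [I, -D]])

# Block-diagonal pair of size-n DFTs
Fn = dft_matrix(n)
blockF = np.block([[Fn, np.zeros((n, n))], [np.zeros((n, n)), Fn]])

# Permutation listing even-indexed entries first, then odd-indexed
P = np.zeros((2 * n, 2 * n))
P[np.arange(n), 2 * np.arange(n)] = 1          # rows 0..n-1 pick x0, x2, ...
P[n + np.arange(n), 2 * np.arange(n) + 1] = 1  # rows n..2n-1 pick x1, x3, ...

F2n = dft_matrix(2 * n)
print(np.allclose(B @ blockF @ P, F2n))  # True
```

Reading the product right to left mirrors the algorithm: $P$ sorts the input into even and odd halves, the block-diagonal matrix transforms each half, and the butterfly factor recombines them.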

My confusion is regarding the first matrix in the factorization. Why is it there, and how does it come about in a derivation? Is it simply what is required to match the right-hand side after we permute the entries? Is there an explicit way to arrive at this conclusion? Why exactly do we use the matrix $D$ here?

Secondly, is my whole understanding of the algorithm correct? Am I missing any details?