Simple proof of Inverse Discrete Fourier Transformation (IDFT)?


Discrete Fourier Transformation (DFT) is defined by:

$X_k = \sum_{n=0}^{N-1} x_n \exp\left(\frac{-2 \pi i k n}{N}\right), \qquad 0\leq k \leq N-1$

And Inverse Discrete Fourier Transformation (IDFT) is defined by:

$x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k \exp\left(\frac{2 \pi i k n}{N}\right), \qquad 0\leq n \leq N-1$

How can I prove the IDFT formula from the definition of the DFT?



BEST ANSWER

Observe
\begin{align}
\frac{1}{N}\sum^{N-1}_{k=0} X_k \exp\left(\frac{2\pi i kn}{N} \right)
&= \frac{1}{N}\sum^{N-1}_{k=0} \left(\sum^{N-1}_{m=0}x_m\exp\left(-\frac{2\pi i km}{N} \right)\right) \exp\left(\frac{2\pi i kn}{N} \right) \\
&= \frac{1}{N}\sum^{N-1}_{m=0}\sum^{N-1}_{k=0} x_m \exp\left(\frac{2\pi i k(n-m)}{N} \right)\\
&= \sum^{N-1}_{m=0}x_m\left(\frac{1}{N}\sum^{N-1}_{k=0} \exp\left(\frac{2\pi i k(n-m)}{N}\right)\right)\\
&= \sum^{N-1}_{m=0} x_m\, \delta_{n, m} = x_n.
\end{align}

Note that I have used the fact that, for $n\ne m$,
\begin{align}
\frac{1}{N}\sum^{N-1}_{k=0} \exp\left(\frac{2\pi i k(n-m)}{N}\right)
= \frac{1}{N}\,\frac{1-\exp(2\pi i(n-m))}{1-\exp\left(\frac{2\pi i(n-m)}{N}\right)} = 0,
\end{align}
where the numerator vanishes because $\exp(2\pi i(n-m)) = 1$ for any integer $n-m$. If $n=m$, then
\begin{align}
\frac{1}{N}\sum^{N-1}_{k=0} \exp\left(\frac{2\pi i k(n-m)}{N}\right)=\frac{1}{N}\sum^{N-1}_{k=0} 1 = \frac{N}{N} = 1.
\end{align}
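As a quick numerical sanity check on this derivation, one can build the DFT sum directly in NumPy and confirm that the stated inverse formula recovers the original samples (the variable names below are just for illustration):

```python
import numpy as np

# Build the N x N matrix with entries W[k, n] = exp(-2*pi*i*k*n/N),
# so that W @ x computes the DFT sum from the question.
N = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)

X = W @ x                 # forward DFT: X_k = sum_n x_n e^{-2 pi i k n / N}
x_rec = W.conj() @ X / N  # inverse formula: (1/N) sum_k X_k e^{+2 pi i k n / N}

assert np.allclose(x_rec, x)          # the IDFT recovers x exactly (up to rounding)
assert np.allclose(X, np.fft.fft(x))  # same convention as NumPy's FFT
```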

Another way is to consider the DFT matrix. Let $e(x) = \exp\left(\frac{2\pi i x}{N}\right)$; then, with rows and columns indexed from $0$ to $N-1$, \begin{align} F = \frac{1}{\sqrt{N}} \begin{bmatrix} e(0\cdot 0) & e(0\cdot 1) & \cdots & e(0\cdot (N-1))\\ e(1\cdot 0) & e(1\cdot 1) & \cdots & e(1\cdot (N-1))\\ \vdots & \vdots & \ddots & \vdots\\ e((N-1)\cdot 0) & e((N-1)\cdot 1) & \cdots & e((N-1)\cdot (N-1)) \end{bmatrix}. \end{align}

Let us check that the columns of $F$ are orthonormal. Observe \begin{align} (\text{column } m)^H(\text{column } n) = \frac{1}{N}\sum_{\ell=0}^{N-1}e(-\ell m)\,e(\ell n) = \frac{1}{N}\sum_{\ell=0}^{N-1}e(\ell (n-m)) = \delta_{n, m}, \end{align} which shows the columns are orthogonal, and taking $n=m$ shows each column has norm $1$. So $F$ is a unitary matrix, which means \begin{align} F^{-1} = F^H = \frac{1}{\sqrt{N}} \begin{bmatrix} e(-0\cdot 0) & e(-1\cdot 0) & \cdots & e(-(N-1)\cdot 0)\\ e(-0\cdot 1) & e(-1\cdot 1) & \cdots & e(-(N-1)\cdot 1)\\ \vdots & \vdots & \ddots & \vdots\\ e(-0\cdot(N-1)) & e(-1\cdot(N-1)) & \cdots & e(-(N-1)\cdot (N-1)) \end{bmatrix}. \end{align} Writing the DFT as $X = \sqrt{N}\,F^H x$, unitarity gives $x = \frac{1}{\sqrt{N}} F X$, whose entries are exactly $x_n = \frac{1}{N}\sum_{k=0}^{N-1} X_k\, e(kn)$, i.e. the IDFT.
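The unitarity claim is easy to verify numerically. A minimal sketch, assuming the $0$-based indexing above (again with illustrative names):

```python
import numpy as np

# F[j, k] = e(j*k) / sqrt(N), where e(x) = exp(2*pi*i*x/N)
# and indices j, k run over 0, ..., N-1.
N = 8
idx = np.arange(N)
F = np.exp(2j * np.pi * np.outer(idx, idx) / N) / np.sqrt(N)

# F^H F = I: the columns are orthonormal, so F^{-1} = F^H.
assert np.allclose(F.conj().T @ F, np.eye(N))
```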


Vectorially, $X=\Omega\,x$ where $\Omega$ is the matrix with entries $\omega^{-kl}$, with $\omega=e^{2i\pi/n}$ a primitive $n^{th}$ root of unity. We have to show that $\Omega^{-1}=\dfrac1n\Omega^*$, i.e. $\Omega^*\Omega=nI$.

Taking a row of $\Omega^*$ and a column of $\Omega$, the elements of the product are

$$p_{kl}=\sum_{m=0}^{n-1}\omega^{km}\omega^{-ml}=\sum_{m=0}^{n-1}\left(\omega^{k-l}\right)^m.$$

Now $\omega^{k-l}$ is either $1$ (when $k=l$) or a different root of unity (when $k\ne l$), so that the sum of powers is either $n$ or $0$. Hence, $p_{kl}=n\delta_{kl}$.


The zero-sum property is justified by the geometric series

$$1+\omega+\omega^2+\cdots+\omega^{n-1}=\frac{\omega^n-1}{\omega-1}=0,$$ which holds for any root of unity with $\omega\ne1$ and $\omega^n=1$ (here applied to $\omega^{k-l}$).
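This matrix identity can also be checked numerically. A short sketch in the notation of this answer (names are illustrative):

```python
import numpy as np

# Omega[k, l] = omega^{-kl} with omega = exp(2*pi*i/n),
# so Omega @ x computes the DFT of x.
n = 8
k = np.arange(n)
omega = np.exp(2j * np.pi / n)
Omega = omega ** (-np.outer(k, k))

# Omega^* Omega = n I, hence Omega^{-1} = (1/n) Omega^*.
assert np.allclose(Omega.conj().T @ Omega, n * np.eye(n))
```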