Fourier Transform: Understanding change of basis property with ideas from linear algebra


The notion of the Fourier transform was always a bit mysterious to me, and I was recently introduced to functional analysis. I am a beginner in this field, but I can almost see that the Fourier transform can be viewed as a change of basis in a space of functions. I read the following article, which tries to build an intuition:

https://sites.google.com/site/butwhymath/fourier-analysis/the-fourier-transform

Now, I can see that the Fourier and inverse Fourier transforms project a function $f(x)$ onto, and back from, the basis of complex exponentials $e^{i2\pi sx}$, respectively: $$ F(s) = \int_{-\infty}^{\infty}f(x)e^{-i2\pi sx}dx$$

$$ f(x) = \int_{-\infty}^{\infty}F(s)e^{i2\pi sx}ds$$

Here are my questions about this view, to make it clearer:

1) If I understand correctly, this operation is akin to the usual linear-algebra change-of-basis operations $a=Mb$ and $b=M^{-1}a$. Roughly, in this case $M$ is a matrix with uncountably many rows and columns, where each row is $e^{-i2\pi sx}$, a function of $x$; similarly, the rows of $M^{-1}$ are $e^{i2\pi sx}$, functions of $s$. Is this interpretation correct?
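As an aside, the finite-dimensional analogue of 1) can be checked numerically. Here is a small sketch (my own illustration, not from the article): take $M$ to be the unitary DFT matrix, whose rows are sampled complex exponentials, and verify that it is an invertible change of basis with $M^{-1} = M^{H}$.

```python
import numpy as np

N = 8
n = np.arange(N)
# Unitary DFT matrix: row s holds samples of e^{-2*pi*i*s*x/N} / sqrt(N)
M = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

b = np.random.default_rng(0).standard_normal(N)  # a vector in the "x" basis
a = M @ b                                        # coefficients in the exponential basis
b_back = M.conj().T @ a                          # M is unitary, so M^{-1} = M^H

print(np.allclose(b, b_back))  # True: the change of basis is invertible
```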

2) I don't have the exact rigour for this, but intuitively, if 1) is a correct interpretation, then the "infinite-dimensional" matrix product $MM^{-1}$ should resemble an infinite-dimensional identity matrix. To test that, I formed the inner product of a row of $M$ with the corresponding column of $M^{-1}$, holding $s$ fixed and equal in both factors, which should correspond to a "diagonal" element of $MM^{-1}$: $\int_{-\infty}^{\infty}e^{i2\pi sx}e^{-i2\pi sx}dx = \int_{-\infty}^{\infty}e^{0}dx=\infty$. So this is not $1$, as expected for an identity matrix. What is the reason for that?
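The divergence in 2) can be seen numerically as well (a sketch; the grid sizes and the sample frequencies $s = 0.5, 0.7$ are my own choices): truncate the inner product to $[-L, L]$ and let $L$ grow. The "diagonal" entry grows like $2L$ without bound, while "off-diagonal" entries stay bounded, which is the behaviour of a Dirac delta $\delta(s-s')$ rather than of a Kronecker identity.

```python
import numpy as np

def inner(s1, s2, L, n=200000):
    """Riemann-sum approximation of the truncated inner product
    integral of e^{i 2 pi s1 x} * e^{-i 2 pi s2 x} over [-L, L]."""
    x, dx = np.linspace(-L, L, n, endpoint=False, retstep=True)
    return np.sum(np.exp(2j * np.pi * (s1 - s2) * x)) * dx

for L in (10, 100, 1000):
    # diagonal entry grows like 2L; off-diagonal entry stays near 0
    print(L, inner(0.5, 0.5, L).real, abs(inner(0.5, 0.7, L)))
```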

There are 2 answers below.

Best answer:

If you have an orthonormal basis $\{ e_{k} \}_{k=1}^{N}$ of a finite-dimensional space, such as one obtained by Gram-Schmidt, then every vector $x$ is expressed as $$ x = \sum_{k} (x,e_{k})e_{k}. $$ This extends to $L^{2}[0,2\pi]$ using $e_{k} =\frac{1}{\sqrt{2\pi}}e^{ikx}$: $$ f = \sum_{k}(f,e_{k})e_{k}. $$ The importance of this basis is that it consists of eigenvectors of $Lf=\frac{1}{i}\frac{df}{dx}$, because $Le_{k}=ke_{k}$. So this basis diagonalizes the differentiation operator.

Finally, the same thing holds in a continuous sense on $L^{2}(\mathbb{R})$ with \begin{align} f & = \int_{k} (f,e_{k})e_{k}\,dk \\ & = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty}f(t)e^{-ikt}dt\right)e^{ikx}dk. \end{align} This is a generalization rather than a precise extension, because $e_{k}=\frac{1}{\sqrt{2\pi}}e^{ikx}$ is not, strictly speaking, an eigenfunction of $L=\frac{1}{i}\frac{d}{dx}$: it does not belong to $L^{2}(\mathbb{R})$, since it is not square integrable. However, for every $\delta > 0$, the function $$ e_{k,\delta}=\int_{k-\delta}^{k+\delta}e_{k'}(x)\,dk' = \frac{1}{\sqrt{2\pi}}\int_{k-\delta}^{k+\delta}e^{ik'x}\,dk' $$ is square integrable, and, in the norm of $L^{2}$, it becomes closer and closer to an eigenvector with eigenvalue $k$ as $\delta\downarrow 0$: $$ \|Le_{k,\delta}-ke_{k,\delta}\| < \delta\|e_{k,\delta}\|. $$ So the Fourier transform is the coefficient function, and the expansion of $f$ looks very much like a "continuous" (i.e., integral) expansion of $f$ in approximate eigenfunctions of the differentiation operator. (The $e_{k,\delta}$ are even mutually orthogonal if the intervals $(k-\delta,k+\delta)$ do not overlap.)
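The final estimate can be checked on the Fourier side: by Plancherel, the $L^{2}$ norm of a function equals that of its transform, and there $L$ acts as multiplication by the frequency variable $k'$, while $\widehat{e_{k,\delta}}$ is (up to normalization) the indicator of $[k-\delta, k+\delta]$. A short numerical sketch (the values $k = 3$, $\delta = 0.2$ and the grid resolution are my own choices) recovers the exact ratio $\delta/\sqrt{3} < \delta$:

```python
import numpy as np

k0, delta = 3.0, 0.2
# On the Fourier side, e_{k0,delta} corresponds to the indicator of
# [k0-delta, k0+delta], and L = (1/i) d/dx acts as multiplication by k'.
kk, dk = np.linspace(k0 - delta, k0 + delta, 200000, endpoint=False, retstep=True)

norm_e    = np.sqrt(np.sum(np.ones_like(kk)) * dk)  # ||e_{k0,delta}||
norm_diff = np.sqrt(np.sum((kk - k0)**2) * dk)      # ||L e_{k0,delta} - k0 e_{k0,delta}||

ratio = norm_diff / norm_e
print(ratio, delta / np.sqrt(3))  # the two agree, and ratio < delta
```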

As a final note, to make this generalization more precise, the identity $$ \|x\|^{2} = \sum_{k}|(x,e_k)|^{2} $$ also holds for the continuous orthogonal expansion: $$ \|f\|^{2} = \int_{k} |(f,e_{k})|^{2}dk. $$ This is how Parseval, after whom Parseval's identity is named, saw it: $$ \int_{-\infty}^{\infty}|f(x)|^{2}dx = \int_{-\infty}^{\infty}|\hat{f}(k)|^{2}dk. $$
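The discrete analogue of Parseval's identity is easy to verify with a unitary FFT (a sketch; the random test vector is arbitrary):

```python
import numpy as np

f = np.random.default_rng(1).standard_normal(256)
fhat = np.fft.fft(f, norm="ortho")  # "ortho" makes the transform unitary

# Parseval: the energy of f equals the energy of its transform
print(np.allclose(np.sum(np.abs(f)**2), np.sum(np.abs(fhat)**2)))  # True
```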

Another answer:

You can imagine the derivative as a matrix such as this, where the step size in the limit definition shrinks to zero:

[image: matrix of finite-difference quotients approximating the derivative]

Or integration like this (a Riemann sum):

[image: lower-triangular matrix of a running Riemann sum approximating integration]

Moreover, the Fourier transform already has a matrix representation in the discrete case:

https://en.wikipedia.org/wiki/DFT_matrix

You need to extend this matrix to infinity and shrink the corresponding interval to zero (a Riemann sum) to get the continuous transform.
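This limit can be illustrated numerically (a sketch; I use the Gaussian $f(x)=e^{-\pi x^2}$ as the test function because it is its own Fourier transform, $F(s)=e^{-\pi s^2}$, and the truncation length and grid size are my own choices): a Riemann sum of the Fourier integral already matches the closed form closely.

```python
import numpy as np

def fourier_riemann(f, s, L=20.0, n=100000):
    """Riemann-sum approximation of F(s) = integral of f(x) e^{-i 2 pi s x} dx,
    truncated to [-L, L]."""
    x, dx = np.linspace(-L, L, n, endpoint=False, retstep=True)
    return np.sum(f(x) * np.exp(-2j * np.pi * s * x)) * dx

f = lambda x: np.exp(-np.pi * x**2)  # self-transform: F(s) = e^{-pi s^2}
for s in (0.0, 0.5, 1.0):
    print(s, fourier_riemann(f, s).real, np.exp(-np.pi * s**2))
```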

$$\int_{-\infty}^{\infty}e^{0}dx=\infty$$

Think of this as summing up all the ones on the diagonal: each diagonal component has the value $1$ and corresponds to an infinitesimal interval $dx$, and when you sum infinitely many of them you get infinity.

Alternatively, think about what would happen if you multiplied the derivative matrix by the integral matrix: you would get the identity matrix, and integrating that would amount to summing the ones on the diagonal.
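This last observation can be checked directly with finite matrices (a sketch; the forward-difference derivative matrix and the running-Riemann-sum integration matrix below are my own discretization choices): their product is exactly the identity.

```python
import numpy as np

n, dx = 50, 0.1
# Forward-difference "derivative" matrix: (D f)[i] = (f[i] - f[i-1]) / dx
D = (np.eye(n) - np.eye(n, k=-1)) / dx
# Running Riemann-sum "integration" matrix: (S f)[i] = dx * (f[0] + ... + f[i])
S = np.tril(np.ones((n, n))) * dx

print(np.allclose(D @ S, np.eye(n)))  # True: differentiating the integral recovers f
```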