Fourier Analysis


I am interested in Fourier analysis, but I don't understand why the coefficients are chosen the way they are, or why the Fourier series converges to a given function.

Can someone give me a simple explanation, or point me to an easy source for a beginner?

Regards, Kevin

5 Answers

Answer (score 1)

The Fourier series converges (in the mean-square sense) provided that $f\in L^2(-l,l)$ on the given interval, that is, $f$ satisfies $\int_{-l}^{l}f^2 \, dx<+\infty$. There is a proof that the series converges under this condition.

The way to understand why the coefficients are calculated as they are is to think of a vector space of infinite dimension. The set $\{1,\sin({\frac{k\pi x}{l}}),\cos({\frac{k\pi x}{l}})\}$ (any other orthogonal basis works equivalently) is a basis of $L^2(-l,l)$, and any function that belongs to this space can be generated as an infinite linear combination of elements of the basis. When you calculate the coefficients you are just calculating the projections of the function onto each 'direction' given by each element of the basis. Of course, the projection is done using the inner product of the space we are using, $L^2(-l,l)$, which is: $$\langle u,v\rangle=\int_{-l}^{l}u\cdot v\,dx$$ where $u,v\in L^2(-l,l)$.
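To make the projection picture concrete, here is a small Python sketch (my own illustration, not part of the answer; the test function and basis elements are made up) that approximates the coefficient of $f$ along a basis element as $\langle f,e\rangle/\langle e,e\rangle$, using a midpoint-rule integral for the inner product:

```python
import math

def inner(u, v, l, n=20000):
    """Approximate <u, v> = integral_{-l}^{l} u(x) * v(x) dx by a midpoint Riemann sum."""
    h = 2 * l / n
    return sum(u(-l + (i + 0.5) * h) * v(-l + (i + 0.5) * h) for i in range(n)) * h

def projection_coefficient(f, e, l):
    """Coefficient of f along basis element e: <f, e> / <e, e>."""
    return inner(f, e, l) / inner(e, e, l)

# A sample function already in the span of the basis, on (-pi, pi):
l = math.pi
f = lambda x: 3 * math.sin(x) + 0.5 * math.cos(2 * x)

b1 = projection_coefficient(f, lambda x: math.sin(x), l)      # recovers ~3
a2 = projection_coefficient(f, lambda x: math.cos(2 * x), l)  # recovers ~0.5
print(b1, a2)
```

Because the sample function lies in the span of the basis, the projections recover its coefficients exactly (up to quadrature error).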

You can find some notes about the topic here. These notes are from MIT and quite nice for a beginner.

Answer (score 3)

You can try reading Rudin's book (p. 185): http://www.scribd.com/doc/9654478/Principles-of-Mathematical-Analysis-Third-Edition-Walter-Rudin

In my opinion, Stein and Shakarchi's Fourier Analysis is a clear and perfectly written textbook: http://press.princeton.edu/titles/7562.html

Answer (score 4)

You ask some pretty profound questions. I will answer them as simply as I can. Before I do that, (and by now I seem like a shill for my old professor), I recommend the book from which I learned all of this, Applied Analysis by the Hilbert Space Method by Samuel S. Holland. Despite the lumbering title, it really is an excellent book for undergraduates who have never heard of a Fourier series.

When people speak of a Fourier series, they might speak of something like this

$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} (a_n \cos{n x} + b_n \sin{n x})$$

Behind this equation, though, buried inside, are some really profound things and assumptions about the very function we have chosen to write this way. There are even profound assumptions behind what the equals sign means!

First of all, what can we say about $f$? Well, because we have chosen to set it equal to a bunch of sines and cosines of increasing frequencies, and those frequencies are just integers, we can say that $f$ must be periodic with period $2 \pi$. That is,

$$f(x+2 \pi) = f(x)$$

I hope you can see that. To do so, plug in $x+2 \pi$ into the right hand side (RHS). And then plug in something else, like $x+\pi$, and see what happens.
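If you prefer a numerical check of that claim (my own sketch, with made-up sample coefficients), a finite trigonometric sum with integer frequencies repeats after $2\pi$ but generally not after $\pi$:

```python
import math

def trig_sum(x, a0=1.0, a=(0.5, -0.2), b=(1.0, 0.3)):
    """A sample finite sum a0/2 + sum_n (a_n cos(nx) + b_n sin(nx))."""
    s = a0 / 2
    for n, (an, bn) in enumerate(zip(a, b), start=1):
        s += an * math.cos(n * x) + bn * math.sin(n * x)
    return s

x = 0.7
shift_2pi = abs(trig_sum(x + 2 * math.pi) - trig_sum(x))  # ~0: genuinely periodic
shift_pi = abs(trig_sum(x + math.pi) - trig_sum(x))       # nonzero: pi is not a period
print(shift_2pi, shift_pi)
```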

OK, so you can see that not all functions may be written like that; they have to be periodic with period $2 \pi$. What about the series itself? It is an infinite series, which means that we have to worry about whether it converges. And this is a very sticky point, because Fourier series do not converge pointwise at each $x$ the way Taylor series do. I'll talk about why that is in a minute. But first, allow me to state mathematically the meaning of the equals sign in the representation of $f$ as a Fourier series:

$$\lim_{N \rightarrow \infty} \int_0^{2 \pi} dx \left [ f(x) - \frac{a_0}{2}- \sum_{n=1}^{N} (a_n \cos{n x} + b_n \sin{n x}) \right ]^2 = 0$$

The integral itself is what we call a mean square error - the average of the square of the error between $f$ and a partial sum. We define convergence not by convergence at each point, but by convergence in an average sense over the interval $[0,2 \pi]$. We do this so that we may speak of representing piecewise continuous objects like square waves, sawtooths, and the like. It turns out that near points of discontinuity, the series may not converge to the function's values - you may have heard of the Gibbs phenomenon.
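To see mean-square convergence in action, here is a Python sketch (my own example, not from the answer) using the standard square-wave series $\frac{4}{\pi}\sum_{n \text{ odd}} \frac{\sin{nx}}{n}$; the mean square error over $[0, 2\pi]$ shrinks as $N$ grows, even though the overshoot near the jumps (Gibbs) persists:

```python
import math

def square(x):
    """Square wave with period 2*pi: +1 on (0, pi), -1 on (pi, 2*pi)."""
    return 1.0 if (x % (2 * math.pi)) < math.pi else -1.0

def partial_sum(x, N):
    """Fourier partial sum of the square wave: (4/pi) * sum of sin(nx)/n over odd n <= N."""
    return (4 / math.pi) * sum(math.sin(n * x) / n for n in range(1, N + 1, 2))

def mean_square_error(N, m=4000):
    """Midpoint-rule approximation of integral_0^{2pi} (f - S_N)^2 dx."""
    h = 2 * math.pi / m
    return sum((square((i + 0.5) * h) - partial_sum((i + 0.5) * h, N)) ** 2
               for i in range(m)) * h

errors = [mean_square_error(N) for N in (1, 5, 25, 125)]
print(errors)  # strictly decreasing toward 0
```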

Now, the most important question: how do we determine the coefficients? We go back to the error definition. Define

$$E(a_0,a_1,\ldots,a_N,b_1,\ldots,b_N) = \int_0^{2 \pi} dx \left [ f(x) - \frac{a_0}{2}- \sum_{n=1}^{N} (a_n \cos{n x} + b_n \sin{n x}) \right ]^2 $$

We naturally choose the $a_k$ and $b_k$ so that the error $E$ is a minimum. We simply demand that

$$\frac{\partial E}{\partial a_k} = 0$$ $$\frac{\partial E}{\partial b_k} = 0$$

for all $k \in \{0,1,2,\ldots,N\}$. Let's work with the $b_k$, as the operations on the $a_k$ are nearly identical. We use the fact that we can take the derivative inside the integral (remember, the sum is finite and everything is well-behaved), and we get

$$\frac{\partial E}{\partial b_k} =2\int_0^{2 \pi} dx \left [ f(x) - \frac{a_0}{2}- \sum_{n=1}^{N} (a_n \cos{n x} + b_n \sin{n x}) \right ] (-\sin{k x}) = 0$$

Rearranging this a bit (the $a_0/2$ term drops out, since $\int_0^{2 \pi} \sin{k x}\, dx = 0$ for integer $k \ge 1$), we get

$$\int_0^{2 \pi} dx \: f(x) \sin{k x} = \sum_{n=1}^{N} \int_0^{2 \pi} dx \:(a_n \cos{n x} + b_n \sin{n x}) \sin{k x}$$

This normally would require us to solve $2N+1$ equations in $2N+1$ unknowns, which would make for a terrifying representation. But we do have one trick up our sleeve: the sines and cosines of integer frequencies are very special in that they satisfy this really nice orthogonality property:

$$\int_0^{2 \pi} dx \: \sin{n x} \, \sin{k x} = \begin{cases} \pi & n=k \\ 0 & n \ne k\end{cases}$$

$$\int_0^{2 \pi} dx \: \cos{n x} \, \sin{k x} = 0$$
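These orthogonality integrals are easy to spot-check numerically; the following sketch (my own, using a midpoint rule with sample frequencies I picked) confirms them:

```python
import math

def integrate(g, a, b, m=10000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / m
    return sum(g(a + (i + 0.5) * h) for i in range(m)) * h

two_pi = 2 * math.pi
s33 = integrate(lambda x: math.sin(3 * x) * math.sin(3 * x), 0, two_pi)  # ~pi (n = k)
s35 = integrate(lambda x: math.sin(3 * x) * math.sin(5 * x), 0, two_pi)  # ~0  (n != k)
c23 = integrate(lambda x: math.cos(2 * x) * math.sin(3 * x), 0, two_pi)  # ~0  (always)
print(s33, s35, c23)
```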

You can prove this to yourself using the sine and cosine addition formulae:

$$2 \sin{a x} \, \sin{b x} = \cos{(a-b) x} - \cos{(a+b) x}$$ $$2 \sin{a x} \, \cos{b x} = \sin{(a+b) x} - \sin{(a-b) x}$$

and noting that the integral of a sine or cosine of a nonzero integer frequency over $[0,2 \pi]$ is zero. Therefore, the coefficients of the series are given by

$$a_n = \frac{1}{\pi} \int_0^{2 \pi} dx \: f(x) \cos{n x}$$ $$b_n = \frac{1}{\pi} \int_0^{2 \pi} dx \: f(x) \sin{n x}$$
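As a sanity check on these formulas (my own example, not from the answer): for $f(x) = x$ on $[0, 2\pi)$ one can work out by hand that $a_0 = 2\pi$, $a_n = 0$ and $b_n = -2/n$ for $n \ge 1$, and a numerical evaluation agrees:

```python
import math

def coeff(f, n, kind, m=20000):
    """a_n or b_n = (1/pi) * integral_0^{2pi} f(x) trig(nx) dx, by midpoint rule."""
    trig = math.cos if kind == "a" else math.sin
    h = 2 * math.pi / m
    return sum(f((i + 0.5) * h) * trig(n * (i + 0.5) * h)
               for i in range(m)) * h / math.pi

f = lambda x: x
a0 = coeff(f, 0, "a")  # ~2*pi
a3 = coeff(f, 3, "a")  # ~0
b3 = coeff(f, 3, "b")  # ~-2/3
print(a0, a3, b3)
```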

Further details and explanations may be found in the book I cited, Chapter 3.

Answer (score 2)

The best way I've found to think about Fourier analysis is: as the linear algebra of the vector space $L^2(S^1)$ in the basis $e_k(x) = e^{2\pi ikx}$. In this perspective, as Myke Arya points out, you have an inner product $$\langle f,g\rangle = \int_{S^1} f(x)\overline{g(x)}\,dx$$ (note the conjugate, needed for complex-valued functions) and Fourier series are just an orthonormal basis decomposition: $$f(x) = \sum_k \langle f,e_k\rangle e_k(x).$$
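A quick numerical illustration of this decomposition (my own sketch; the sample function is made up): for a function that is already a finite combination of the $e_k$, the inner products $\langle f, e_k \rangle$ recover exactly its coefficients:

```python
import cmath

def fourier_coefficient(f, k, m=4000):
    """<f, e_k> = integral_0^1 f(x) * conj(e^{2*pi*i*k*x}) dx, by midpoint rule."""
    h = 1 / m
    return sum(f((j + 0.5) * h) * cmath.exp(-2j * cmath.pi * k * (j + 0.5) * h)
               for j in range(m)) * h

# f = 2*e_0 + 1*e_2 - 0.5*e_{-1}; only those three coefficients should be nonzero.
f = lambda x: (2 + cmath.exp(2j * cmath.pi * 2 * x)
               - 0.5 * cmath.exp(-2j * cmath.pi * x))

c0 = fourier_coefficient(f, 0)    # ~2
c2 = fourier_coefficient(f, 2)    # ~1
cm1 = fourier_coefficient(f, -1)  # ~-0.5
c1 = fourier_coefficient(f, 1)    # ~0
print(c0, c2, cm1, c1)
```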

This subject is not only interesting in its own right, it is deeply related to, for example, representation theory (the functions $e_k$ are unitary representations of $\mathbb{R}$ on $\mathbb{C}$) and spectral theory ($\Delta e_k = -4\pi^2 k^2 e_k$, where $\Delta$ is the Laplacian operator).

I have found the book Fourier Series and Integrals, by Dym and McKean, to be very helpful as I have learned the subject.

Answer (score 0)

It seems to me that you are interested in the identity \begin{equation} f(x) = \sum_{k=-\infty}^\infty \frac{1}{2\pi} \int_{-\pi}^\pi f(t) e^{-ikt} dt \, e^{ikx} \ , \end{equation} which is true, for example, if $f$ is $2\pi$-periodic and piecewise differentiable with a finite number of pieces on the interval $[-\pi,\pi]$ and satisfies \begin{equation} f(x) = \frac{1}{2} (\lim_{t \rightarrow x^+} f(t) + \lim_{t \rightarrow x^-} f(t)) \end{equation} for every $x \in \mathbb{R}$. To see this, we define \begin{eqnarray} f(x+) & = & \lim_{t \rightarrow x^+} f(t) \ , \\ f(x-) & = & \lim_{t \rightarrow x^-} f(t) \ . \end{eqnarray} For $x \neq 2\pi k$ we calculate \begin{eqnarray} \sin(1/2 \ x) \sum_{k=-n}^n e^{ikx} & = & \sin(1/2 \ x) (e^{-ix})^n \frac{1-(e^{ix})^{2n+1}}{1-e^{ix}} = \sin((n+1/2)x) \ . \end{eqnarray} The left hand side also equals the right hand side when $x = 2\pi k$, since both sides vanish there. We define \begin{eqnarray} h_1(t) & = & \frac{f(x-t+)-f(x+)}{\sin(1/2 \ t)} \ , t \in (-\pi,0) \ , \\ h_2(t) & = & \frac{f(x-t-)-f(x-)}{\sin(1/2 \ t)} \ , t \in (0,\pi) \ . \end{eqnarray} Then \begin{eqnarray} \sin(1/2 \ t) h_1(t-) & = & f(x-t+)-f(x+) \ , t \in (-\pi,0] \ , \\ \sin(1/2 \ t) h_2(t+) & = & f(x-t-)-f(x-) \ , t \in [0,\pi) \ . \end{eqnarray} Note that $h_1(0-)$ equals $-2$ times the right derivative of $f(t+)$ at $x$, and $h_2(0+)$ equals $-2$ times the left derivative of $f(t-)$ at $x$. This is obtained by multiplying and dividing the expression for $h_k$ by $t/2$ and taking the limit as the argument approaches $0$ from the left and right, respectively.
We calculate using the Riemann-Lebesgue lemma and obtain \begin{eqnarray} \sum_{k=-\infty}^\infty & \frac{1}{2\pi} & \int_{-\pi}^\pi f(t) e^{-ikt} dt e^{ikx} = \sum_{k=-\infty}^\infty \frac{1}{2\pi} \int_{x-\pi}^{x+\pi} f(t) e^{-ikt} dt e^{ikx} \\ & = & \lim_{n \rightarrow \infty} \sum_{k=-n}^n \frac{1}{2\pi} \int_{x-\pi}^{x+\pi} f(t) e^{-ikt} dt e^{ikx} = \lim_{n \rightarrow \infty} \frac{1}{2\pi} \int_{x-\pi}^{x+\pi} f(t) \sum_{k=-n}^n e^{ik(x-t)} dt \\ & = & \lim_{n \rightarrow \infty} \frac{1}{2\pi} \int_{-\pi}^\pi f(x-t) \sum_{k=-n}^n e^{ikt} dt \\ & = & \lim_{n \rightarrow \infty} \frac{1}{2\pi} \int_{-\pi}^0 f(x-t+) \sum_{k=-n}^n e^{ikt} dt + \lim_{n \rightarrow \infty} \frac{1}{2\pi} \int_0^\pi f(x-t-) \sum_{k=-n}^n e^{ikt} dt \\ & = & \lim_{n \rightarrow \infty} \frac{1}{2\pi} \int_{-\pi}^0 (f(x-t+) - f(x+)) \sum_{k=-n}^n e^{ikt} dt + \frac{1}{2} f(x+) \\ & & + \lim_{n \rightarrow \infty} \frac{1}{2\pi} \int_0^\pi (f(x-t-) - f(x-)) \sum_{k=-n}^n e^{ikt} dt +\frac{1}{2} f(x-) \\ & = & \lim_{n \rightarrow \infty} \frac{1}{2\pi} \int_{-\pi}^0 \sin(1/2 \ t) h_1(t-) \sum_{k=-n}^n e^{ikt} dt + \frac{1}{2} f(x+) \\ & & + \lim_{n \rightarrow \infty} \frac{1}{2\pi} \int_0^\pi \sin(1/2 \ t) h_2(t+) \sum_{k=-n}^n e^{ikt} dt +\frac{1}{2} f(x-) \\ & = & \lim_{n \rightarrow \infty} \frac{1}{2\pi} \int_{-\pi}^0 h_1(t-) \sin((n+1/2)t) dt + \frac{1}{2} f(x+) \\ & & + \lim_{n \rightarrow \infty} \frac{1}{2\pi} \int_0^\pi h_2(t+) \sin((n+1/2)t) dt +\frac{1}{2} f(x-) \\ & = & \frac{1}{2} (f(x+) + f(x-)) = f(x) \ . \end{eqnarray} In the step where the integral is split at $0$, the integrands are replaced by their right and left limits, respectively. This does not affect the value of the integral, because the difference is nonzero only on a finite set. The function $h_1(t-)$ has a limit as $t \rightarrow 0^-$, hence it is bounded in a neighborhood of $0$. The expression for $h_1$ is also bounded elsewhere, and similar reasoning shows that $h_2$ is bounded too.
Hence both are absolutely integrable and we may apply the Riemann-Lebesgue lemma. This shows that the Fourier series converges to the function from which the coefficients were calculated.
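The Dirichlet-kernel identity $\sin(t/2)\sum_{k=-n}^n e^{ikt} = \sin((n+1/2)t)$ used above is easy to verify numerically; here is a small sketch (my own, with arbitrary sample values of $n$ and $t$):

```python
import math

def dirichlet_sum(t, n):
    """sum_{k=-n}^{n} e^{ikt}, which is real-valued: 1 + 2*sum_{k=1}^{n} cos(kt)."""
    return 1 + 2 * sum(math.cos(k * t) for k in range(1, n + 1))

n, t = 7, 1.3
lhs = math.sin(t / 2) * dirichlet_sum(t, n)
rhs = math.sin((n + 0.5) * t)
print(lhs, rhs)  # equal up to floating-point error
```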

Consider now two Fourier series of the same function $f$, \begin{eqnarray} f(x) & = & \sum_{k=-\infty}^\infty c_k^1 e^{ikx} \\ f(x) & = & \sum_{k=-\infty}^\infty c_k^2 e^{ikx} \ . \end{eqnarray} Subtracting the two implies \begin{equation} 0 = \sum_{k=-\infty}^\infty (c_k^1-c_k^2) e^{ikx} \ . \end{equation} We also have \begin{equation} e^{inx} = \sum_{k=-\infty}^\infty \delta_{n,k} e^{ikx} \ . \end{equation} Parseval's theorem now implies \begin{eqnarray} \frac{1}{2\pi} \int_{-\pi}^\pi 0 \cdot \overline{e^{inx}} dx & = & \sum_{k=-\infty}^\infty (c_k^1-c_k^2)\overline{\delta_{n,k}} \\ 0 & = & c_n^1-c_n^2 \ . \end{eqnarray} Hence $c_n^1=c_n^2$ for every $n \in \mathbb{Z}$. This shows that the Fourier series is unique, that is, there is only one way to choose the coefficients, namely \begin{equation} c_k = \frac{1}{2\pi} \int_{-\pi}^\pi f(t) e^{-ikt} dt \ . \end{equation}
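As an illustration of this coefficient formula (my own example, not from the answer): for $f(t) = t$ on $(-\pi, \pi)$, integration by parts gives $c_k = i(-1)^k/k$ for $k \neq 0$, and a numerical evaluation agrees:

```python
import cmath
import math

def c(f, k, m=20000):
    """c_k = (1/2pi) * integral_{-pi}^{pi} f(t) e^{-ikt} dt, by midpoint rule."""
    h = 2 * math.pi / m
    return sum(f(-math.pi + (j + 0.5) * h)
               * cmath.exp(-1j * k * (-math.pi + (j + 0.5) * h))
               for j in range(m)) * h / (2 * math.pi)

f = lambda t: t
for k in (1, 2, 3):
    exact = 1j * (-1) ** k / k
    print(k, c(f, k), exact)  # numerical and exact values match
```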

The Fourier inversion theorem is more difficult to prove.