Derivation of Fourier Series


Despite a lengthy search, I've been unable to turn up a derivation of the Fourier Series where the author doesn't begin by giving:

$$\frac{a_0}{2}+\sum_{n=1}^\infty \left(a_n \cos(\frac{2\pi}{T}nx) + b_n \sin(\frac{2\pi}{T}nx)\right)$$

And from this deriving coefficients $a_n$, $b_n$. Where does this first formula come from? I've been able only to find scraps of information, and what I have stumbled upon lacked much explanation and/or was very esoteric.


There is a nice formal answer here, and a comment by @Qiaochu Yuan referencing the solution of the heat equation $$\frac {\partial u}{\partial t}=\alpha \left( u_{xx} + u_{yy} + u_{zz} \right)=\alpha \Delta u\tag 1$$ where $u$ is the temperature at a given point and time and $\Delta u$ is the Laplacian (the sum of the second partial derivatives). Consider the one-dimensional case on the interval $[0,1),$ with the temperature held at $0$ at both boundaries.

The solution is given by a product of a function of space, $f(x),$ and a function of time, $T(t)$ (separation of variables). Since the LHS of $(1)$ is the partial derivative with respect to time, $f\,T'=\alpha\, \Delta f\cdot T.$ Dividing both sides by $Tf$ gives $\frac{T'}T=\alpha \frac{\Delta f}f,$ which is constant at all points and times; it must be negative for the temperature to decay, so it can be equated to a constant $-\alpha \omega^2.$ This yields two differential equations: $T'=-\alpha \omega^2 T,$ which has the exponential solution $T(t)= C e^{-\alpha \omega^2 t},$ and an eigenfunction equation, $\Delta f(x)=-\omega^2 f(x),$ where $-\omega^2$ is the eigenvalue. Any solution of the form $u(x,t) = \sum_\omega C_\omega f_\omega(x) \cdot e^{-\alpha \omega^2 t}$ must therefore satisfy the eigenfunction equation. Since the eigenfunction equation involves a second derivative, its general solution is $f(x)= a \cos(\omega x) + b \sin(\omega x).$ The boundary condition $u(0,t)=0$ at the left end of the interval eliminates the contribution of the cosines, and the fact that the other boundary is also held at $0$ forces $\omega$ to be an integer multiple of $\pi,$ resulting in a solution of the form

$$u(x,t)=\sum_{n=1}^\infty C_n \sin(n\pi x)\cdot e^{-\alpha n^2\pi^2t}$$
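As a sanity check (a minimal sketch of my own, not from the answer), one can verify numerically that a single eigenmode $\sin(n\pi x)\,e^{-\alpha n^2\pi^2 t}$ satisfies $u_t=\alpha\, u_{xx}$, using central finite differences; the values of $\alpha$ and $n$ here are arbitrary:

```python
import math

# Arbitrary illustrative parameters: diffusivity alpha and mode number n.
alpha, n = 0.5, 3

def u(x, t):
    # Single eigenmode of the 1-D heat equation on [0, 1) with u = 0 at the boundaries.
    return math.sin(n * math.pi * x) * math.exp(-alpha * (n * math.pi) ** 2 * t)

# Check du/dt == alpha * d2u/dx2 at a sample point via central differences.
x0, t0, h = 0.3, 0.1, 1e-5
du_dt = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
d2u_dx2 = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h ** 2
print(abs(du_dt - alpha * d2u_dx2))  # close to zero
```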

The motivation behind the definition of the Fourier series is to express any minimally well-behaved periodic function $s(x)$ as an infinite sum of sinusoidal waves. This can be achieved with sines alone or with cosines alone (the discrete cosine transform used by the JPEG format being an example of the latter) by introducing a phase shift: after all, a cosine is just a sine wave shifted by ninety degrees.
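The phase-shift remark can be checked numerically: any combination $a\cos x + b\sin x$ collapses into a single shifted sinusoid $R\sin(x+\varphi)$ with $R=\sqrt{a^2+b^2}$. This small sketch (my own illustration, with arbitrary $a$, $b$) confirms the identity at a test point:

```python
import math

# a*cos(x) + b*sin(x) == R*sin(x + phi), with R = hypot(a, b) and phi = atan2(a, b),
# since R*sin(x + phi) = (R*cos(phi))*sin(x) + (R*sin(phi))*cos(x).
a, b = 1.5, -2.0                      # arbitrary amplitudes
R, phi = math.hypot(a, b), math.atan2(a, b)

x = 0.7                               # arbitrary test point
lhs = a * math.cos(x) + b * math.sin(x)
rhs = R * math.sin(x + phi)
print(abs(lhs - rhs))  # ~0
```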

Each of the terms in the sum, for example $\cos\left(2\pi\frac{n}{P}x\right)$ and $\sin\left(2\pi\frac{n}{P}x\right),$ is an element of a basis of a vector space of functions (thank you @mjw), corresponding to the harmonic frequency $\frac n P.$ Here $P$ stands for the period (the unit interval in the heat-equation example above). The factor $2\pi$ can be understood as dividing one cycle around the unit circle by the length of the interval, and $n$ counts the harmonics, like stops on a dial.

The idea is that the signal $s(x)$ that we wish to express as a sum of sinusoidal waves is "dotted" with each of the basis functions across the domain $P$: the integral used to solve for each coefficient $a_n$ and $b_n$ is an inner product, a projection of the signal onto the basis function. Much like the dot product of two vectors is maximal when the vectors are aligned, the weight (coefficient) of a particular basis function in the (in theory, though clearly not in practice) infinite series reflects how well that basis function tracks $s(x).$

Picking up on @mjw's comment, the basis functions $\left\{1,\cos\left(2\pi\frac{n}{P}x\right), \sin\left(2\pi\frac{n}{P}x\right) \right\}$ are mutually orthogonal, which makes determining the coefficients feasible: simply assume that $s(x)$ can indeed be represented as a Fourier series, i.e.
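The orthogonality claim can be verified numerically. The following sketch (my own, not from the post) approximates the inner products of a few basis functions over one period $P$ with a Riemann sum; products of distinct basis functions integrate to $0$, while each nonconstant basis function dotted with itself gives $P/2$:

```python
import math

# Check orthogonality of the Fourier basis {1, cos(2*pi*n*x/P), sin(2*pi*n*x/P)}
# over one period P, using a uniform Riemann sum (very accurate for periodic integrands).
P, N = 2 * math.pi, 100_000
dx = P / N
xs = [i * dx for i in range(N)]

def inner(f, g):
    # Riemann-sum approximation of the inner product <f, g> over [0, P).
    return sum(f(x) * g(x) for x in xs) * dx

cos2 = lambda x: math.cos(2 * math.pi * 2 * x / P)
cos3 = lambda x: math.cos(2 * math.pi * 3 * x / P)
sin2 = lambda x: math.sin(2 * math.pi * 2 * x / P)

print(round(inner(cos2, cos3), 6))  # distinct frequencies: 0
print(round(inner(cos2, sin2), 6))  # cosine against sine: 0
print(round(inner(cos2, cos2), 6))  # basis function with itself: P/2 = pi
```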

$$s(x)=\small\frac{a_0}{2}+\sum_{n=1}^\infty \left(a_n \cos\left(2\pi\frac{n}{P}x\right) + b_n \sin\left(2\pi\frac{n}{P}x\right)\right)$$

and multiply both sides by each basis function in turn, then integrate (take the inner product) over the period $P$. The orthogonality of these basis functions annihilates every summand except the one being extracted, which is multiplied by $\small \frac 1 2 P,$ explaining the normalizing factor in front of the integral in the formula for the coefficients. For example, if $P=2\pi,$ the coefficient of the $n=1$ cosine term is extracted by multiplying by $\cos(x)$ and integrating, $a_1=\frac 2 {2\pi} \int_{-\pi}^\pi s(x) \cos\left(x\right)dx,$ because in the expansion of $s(x)$ only the $\cos(x)\cos(x)$ term integrates to a nonzero value:
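As a concrete illustration of this projection (my own example, with an invented signal $s(x)$ and $P=2\pi$), the integral formula recovers each coefficient of a known trigonometric polynomial:

```python
import math

# Invented test signal with known coefficients: a0/2 = 0.5, a1 = 3, b2 = 2.
def s(x):
    return 0.5 + 3 * math.cos(x) + 2 * math.sin(2 * x)

# Uniform grid over one period [-pi, pi) for the Riemann-sum integrals.
N = 100_000
dx = 2 * math.pi / N
xs = [-math.pi + i * dx for i in range(N)]

def coeff(basis):
    # a_n or b_n = (2/P) * integral over one period of s(x) * basis(x).
    return (2 / (2 * math.pi)) * sum(s(x) * basis(x) for x in xs) * dx

a1 = coeff(lambda x: math.cos(x))       # projects out the cos(x) component
b2 = coeff(lambda x: math.sin(2 * x))   # projects out the sin(2x) component
a0 = coeff(lambda x: 1.0)               # constant basis function, so a0/2 = 0.5
print(round(a1, 6), round(b2, 6), round(a0, 6))  # 3.0 2.0 1.0
```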
