Why do we use trig functions in Fourier transforms, and not other periodic functions?


Why, when we perform Fourier transforms/decompositions, do we use sine/cosine waves (or more generally complex exponentials) and not other periodic functions? I understand that they form a complete basis set of functions (although I don't understand rigorously why), but surely other periodic functions do too?

Is it purely because sine/cosine/complex exponentials are convenient to deal with, or is there a deeper reason? I get the relationship these have to circles and progressing around them at a constant rate, and how that is nice and conceptually pleasing, but does it have deeper significance?


Answer (6 votes)

The Fourier basis functions $e^{i \omega x}$ are eigenfunctions of the shift operator $S_h$ that maps a function $f(x)$ to the function $f(x - h)$: $$ e^{i \omega (x-h)} = e^{-i\omega h} e^{i \omega x} $$ for all $x \in \mathbb R$.

All of the incarnations of the Fourier transform (such as Fourier series and the discrete Fourier transform) can be understood as changing basis to a basis of eigenvectors for a shift operator.

It is possible to consider other operators, which have different eigenfunctions leading to different transforms. But this shift operator is so simple and fundamental that it's not surprising the Fourier transform turns out to be particularly useful.
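The eigenvalue relation above can be checked numerically in its discrete form: circularly shifting a sampled signal multiplies each DFT coefficient by a pure phase. A minimal sketch using NumPy (the signal length and shift are arbitrary choices):

```python
import numpy as np

# A circular shift in the signal domain multiplies each DFT coefficient
# by a pure phase -- the discrete analogue of S_h e^{iwx} = e^{-iwh} e^{iwx}.
rng = np.random.default_rng(0)
N, h = 64, 5
f = rng.standard_normal(N)

F = np.fft.fft(f)
F_shifted = np.fft.fft(np.roll(f, h))      # DFT of f shifted by h samples

k = np.arange(N)
phase = np.exp(-2j * np.pi * k * h / N)    # eigenvalue of the shift operator
assert np.allclose(F_shifted, phase * F)
```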

Answer (3 votes)

There is no direct mathematical reason to use sine/cosine/exponentials. In fact, you can define a similar decomposition using any orthogonal basis of the square-integrable functions. For example, you could decompose a function on an interval using the Legendre polynomials, or, more generally, take any sufficiently nice basis and do what is called a wavelet transform. Most of the properties of the Fourier transform, such as isometry, will still hold, with essentially identical proofs.

There are, however, a lot of indirect reasons to use sine/cosine/exponentials, namely that they have a lot of nice and useful properties, mostly related to differentiation. Just to name a few off the top of my head:

  • They are eigenfunctions of the differential operator; that is, they reproduce under differentiation: $\frac{d}{dt} e^{ikt}$ is again a multiple of $e^{ikt}$. We can use this to solve differential equations by taking the transform, which turns them into simple linear equations.
  • They are the solutions of the simple harmonic oscillator $\ddot{f} = -kf$. This equation (or variations of it) turns up extremely often in physics, so it is no wonder that the Fourier series or transform is useful when dealing with such problems. (And indeed, for other equations you will need a different transform.)
  • They are analytic and periodic. I know that you can turn any function on an interval into a periodic one, but since sine/cosine/exponentials are equal to their own power series everywhere, they are, in a sense, "naturally periodic".
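The first point is easy to see in action: in the Fourier basis, differentiating a periodic function reduces to multiplying coefficient $k$ by $ik$. A small NumPy sketch (the test function is an arbitrary choice):

```python
import numpy as np

# Spectral differentiation: d/dx acts on the Fourier basis as
# multiplication by i*k, because each e^{ikx} is an eigenfunction.
N = 128
x = 2 * np.pi * np.arange(N) / N
f = np.sin(3 * x) + 0.5 * np.cos(5 * x)
exact = 3 * np.cos(3 * x) - 2.5 * np.sin(5 * x)

k = np.fft.fftfreq(N, d=1.0 / N)   # integer wavenumbers 0, 1, ..., -2, -1
df = np.fft.ifft(1j * k * np.fft.fft(f)).real
assert np.allclose(df, exact)
```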
Answer (2 votes)

There is some deeper significance from the point of view of representation theory.

For the Fourier transform on the circle, functions of the form $e^{ikx}$ (depending on the period/normalization) are precisely the characters, i.e., the irreducible complex representations of the group $\mathbb T$ (which you can think of as ${\mathbf R}/{\mathbf Z}$, ${\mathbf R}/2\pi{\mathbf Z}$, the complex numbers of norm $1$, or whichever normalization you prefer).

Functions $\sin(kx)$ and $\cos(kx)$ are the matrix coefficients of the irreducible real representations of the group.

Similarly, for the real line, $e^{i\xi x}$ are the irreducible complex unitary representations of the group $(\mathbf R,+)$, while $\sin(\xi x)$, $\cos(\xi x)$ are the matrix coefficients of the irreducible orthogonal real representations.

Representation theory gives a precise sense to the Fourier transform for any locally compact group (and probably more, but I'm no specialist), and in the abelian case, we have Pontryagin duality, which is responsible for the inverse Fourier transform.

Answer (2 votes)

I will respectfully disagree with littleO, elaborate on the answer of mlk, and argue that an even more fundamental reason for the choice of the trig functions as basis functions is that they are the eigenfunctions of the Laplacian.

The smoothness (in the geometric, and not analytic, sense) of a function on $S^1$ can be measured by calculating its Dirichlet energy $$E(f) = \int \langle \nabla f, \nabla f\rangle,$$ where after applying integration by parts, $$E(f) = -\int f\Delta f = -\langle f, \Delta f\rangle.$$

The Laplacian is self-adjoint and negative, and its eigenfunctions are the usual sines and cosines. Let us sort them in ascending order of their eigenvalue magnitudes to get basis functions $b_i(\theta)$ with eigenvalues $\lambda_i$. Of course, $b_0$ is the DC component $b_0(\theta)=\frac{1}{\sqrt{2\pi}}$ with eigenvalue 0, etc.

If we now expand a function $f$ in this basis, $f(\theta) = \sum_i \alpha_i b_i(\theta)$, we have $$E(f) = \sum_i \alpha_i^2 (-\lambda_i).$$

In other words, the first few entries in the expansion of $f$, $\sum_{i=0}^N \alpha_i b_i$, contain the "smooth parts" of $f$, the parts that contribute least to $f$'s Dirichlet energy. The more terms you add, the more high-frequency details you recover. In the (very common) case that you must approximate a function using only a limited amount of information, and the coarse, smooth behavior of $f$ is most important to preserve, the Fourier basis thus gives you a natural representation for doing so.

The above picture generalizes directly to other settings, such as on the sphere (where the spherical harmonics play the role of the sines and cosines) or other manifolds.
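This energy picture can be illustrated numerically: keeping only the low-frequency coefficients of a function retains nearly all of its $L^2$ norm while discarding most of its Dirichlet energy, since the energy weights the coefficient at wavenumber $k$ by $k^2$. A rough sketch using NumPy (the example function and cutoff are arbitrary choices):

```python
import numpy as np

# Low modes carry the "smooth" part of f: truncating the Fourier series
# keeps most of the L2 norm but sheds most of the k^2-weighted
# Dirichlet energy contributed by the high-frequency detail.
N = 256
x = 2 * np.pi * np.arange(N) / N
f = np.exp(np.sin(x)) + 0.05 * np.sin(40 * x)   # smooth part + fine detail

c = np.fft.fft(f) / N
k = np.fft.fftfreq(N, d=1.0 / N)
l2 = np.sum(np.abs(c) ** 2)                 # Parseval: mean of |f|^2
energy = np.sum(k**2 * np.abs(c) ** 2)      # mean of |f'|^2 (Dirichlet energy)

keep = np.abs(k) <= 10                      # low-pass truncation
l2_kept = np.sum(np.abs(c[keep]) ** 2)
energy_kept = np.sum(k[keep] ** 2 * np.abs(c[keep]) ** 2)

print(l2_kept / l2, energy_kept / energy)   # L2 fraction near 1; energy fraction much smaller
```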

Answer (0 votes)

Your question is partly about History. And the History of how Mathematicians were led to consider orthogonal expansions in trigonometric functions is not a natural one. In fact, Fourier's conjecture that arbitrary mechanical functions could be expanded in a trigonometric series was not believed by other Mathematicians at the time; the controversy concerning this issue led to banning Fourier's original work from publication for more than a decade.

The idea behind trigonometric expansions grew out of looking at the wave equation for the displacements of a string: $$ \frac{\partial^2 u}{\partial t^2}=c^2\frac{\partial^2 u}{\partial x^2}. $$ In 1715, B. Taylor concluded that for any integer $n\ge 1$, the function $$ u_n(x,t)=\sin\frac{n\pi x}{a}\cos\frac{n\pi c}{a} (t-\beta_n) $$ represented a standing wave solution. $n=1$ corresponded to the "fundamental" tone, and for $n=2,3,\cdots$ the other solutions were its harmonics (which is where the term Harmonic Analysis first arose in Mathematics). It was a natural question at the time to ask whether a general solution could be constructed from a combination of the fundamental mode and the harmonics.

If such a general solution were to exist in the form $$ u(x,t) = \sum_{n=1}^{\infty}A_n u_n(x,t), $$ where the $A_n$ are constants, then it would be necessary to be able to expand the initial displacement function as $$ u(x,0) = \frac{a_0}{2}+a_1\cos\frac{\pi x}{a}+b_1\sin\frac{\pi x}{a}+\cdots. $$ The consensus at the time was that an arbitrary initial mechanical function could not be expanded in this way, because the function on the right would be analytic while $u(x,0)$ might not be. (This reasoning was not correct, but Mathematics was not very rigorous during that era.) The orthogonality relations used to isolate the coefficients were not discovered until some time later, by Clairaut and Euler.

Fourier decided that such an expansion could be done, and he set out to prove it. That his work was then banned from publication for over a decade tells us just how unnatural the idea of expanding in a Fourier series seemed at the time.

Fourier did not come up with the Fourier series, and he did not discover the orthogonality conditions which allowed him to isolate the coefficients in such an expansion. He did, however, come up with the Dirichlet integral for the truncated series, and he did essentially give the Dirichlet integral proof for the convergence of the Fourier Series, though it was falsely credited to Dirichlet. Fourier's work on this expansion became a central focus in Mathematics. And trying to study the convergence of the trigonometric series forced Mathematics to become rigorous.

What Fourier did that was original was to abstract the discrete Fourier series to the Fourier transform and its inverse, by arguing that the Fourier transform was the limit of the Fourier series as the period of the fundamental mode tended to infinity. He used this to solve the heat equation on infinite and semi-infinite intervals. Fourier's argument was flawed, but his result was correct. He derived the Fourier cosine transform and its inverse, as well as the sine transform and its inverse, with the correct normalization constants: \begin{align} f & \sim \frac{2}{\pi}\int_{0}^{\infty}\sin(st)\left(\int_{0}^{\infty}\sin(st')f(t')dt'\right)ds \\ f & \sim \frac{2}{\pi}\int_{0}^{\infty}\cos(st)\left(\int_{0}^{\infty}\cos(st')f(t')dt'\right)ds. \end{align} He used these to solve PDEs on semi-infinite domains. The sines and cosines were eigenfunctions of $\frac{d^2}{dx^2}$ obtained using Fourier's separation-of-variables technique. The term "eigenvalue" grew out of this technique, as a way of understanding Fourier's separation parameters.

Based on this story, I would say that it was not a natural idea to expand a function in trigonometric functions. Fourier's work led to the notions of linear operators, eigenvalues, self-adjointness, symmetry, and general orthogonal expansions in eigenfunctions of a differential operator, but it took over a century for this work to look "natural."

Answer (0 votes)

Sine and cosine functions are eigenfunctions of the second-derivative operator with negative eigenvalues, $\frac{d^2}{dx^2} \sin(\omega x) = -\omega^2 \sin(\omega x)$, while exponentials (and hyperbolic sines/cosines) are eigenfunctions with positive eigenvalues. This makes Laplace transforms (for positive eigenvalues) and Fourier transforms (for negative eigenvalues) very useful for second-order differential equations. Force is proportional to the second derivative of position, so sinusoidal functions are the eigenfunctions in a variety of physical situations: harmonic motion, waves, etc.

Sinusoidal functions are in some sense the "simplest" periodic functions.
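A quick finite-difference check of the eigenfunction relation (the frequency and grid here are arbitrary choices):

```python
import numpy as np

# Numerical check that sin(w x) is an eigenfunction of d^2/dx^2
# with eigenvalue -w^2:  f''(x) = -w^2 f(x).
w = 3.0
x = np.linspace(0, 2 * np.pi, 2001)
dx = x[1] - x[0]
f = np.sin(w * x)

d2f = np.gradient(np.gradient(f, dx), dx)   # second derivative via central differences
interior = slice(2, -2)                     # drop less-accurate boundary points
assert np.allclose(d2f[interior], -w**2 * f[interior], atol=1e-3)
```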

Answer (0 votes)

Complex exponentials are the eigenfunctions of linear shift-invariant systems, and a great many physical and mathematical problems are, at least to first approximation, linear shift/time-invariant systems. As such, their system behavior is equally well described by the impulse response and by its Fourier transform, the frequency response.

Convolution with the impulse response corresponds to multiplication of the respective functions in the Fourier transform domain. Convolution is also a fundamentally important operation for probability distributions, whose Fourier transforms are the characteristic functions.
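The convolution theorem is easy to verify numerically in the discrete (circular) case; a sketch with NumPy, using arbitrary random signals:

```python
import numpy as np

# Convolution theorem: circular convolution in the signal domain equals
# pointwise multiplication of DFTs -- the reason LSI systems are
# analyzed through their frequency response.
rng = np.random.default_rng(1)
N = 32
f = rng.standard_normal(N)   # input signal
h = rng.standard_normal(N)   # impulse response

# Circular convolution computed directly from the definition...
conv = np.array([np.sum(f * h[(n - np.arange(N)) % N]) for n in range(N)])
# ...and via the Fourier domain.
conv_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)).real
assert np.allclose(conv, conv_fft)
```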