(Motivation behind) Orthogonality of functions


I'm interested in understanding the usual inner product on function spaces more deeply than I already do. That is, the inner product $\int f(t)\,g^*(t)\,dt$, where $f$ and $g$ are complex-valued functions over whatever domain.

In my research so far, I've seen analogies drawn between this inner product and the dot product on $\mathbb{C}^n$. For example, we have $(v_1, v_2, \ldots, v_n) \cdot (u_1, u_2, \ldots, u_n) = \sum_{i=1}^n v_i u_i^*$, and this is a bit like considering $f(x) g(x)^*$ for each $x$ and then integrating over the domain.
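To make this analogy a little more concrete, here is a small numerical sketch of the idea that the integral is a limit of scaled dot products (the functions, interval, and grid size here are my own illustrative choices):

```python
import numpy as np

# Sample each function at n points of [0, 2*pi], view the samples as
# vectors, and take a scaled dot product. This Riemann sum tends to the
# integral inner product as n grows; each sample point plays the role
# of one "coordinate" of the function.

def discrete_inner(f, g, a=0.0, b=2 * np.pi, n=10_000):
    """Riemann-sum approximation of the integral of f(t) * conj(g(t)) on [a, b]."""
    t = np.linspace(a, b, n, endpoint=False)
    dt = (b - a) / n
    return np.sum(f(t) * np.conj(g(t))) * dt

print(discrete_inner(np.sin, np.cos))  # close to 0  (orthogonal)
print(discrete_inner(np.sin, np.sin))  # close to pi (squared norm of sin)
```

Of course, this only shows the finite-dimensional picture converging to the integral; it does not by itself justify the infinite-dimensional theory, which is exactly the gap the question is asking about.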

I find this (so far) unsatisfactory for a number of reasons. Firstly, it is not clear in what sense the image of each point in the domain represents a 'dimension' of the function space. Secondly, it isn't clear that what works in finite dimensions can naturally be extended to infinite dimensions. I understand that the above is only really meant to be an analogy rather than some proper argument, but I don't find analogies very helpful unless the situations actually are analogous.

Essentially, I would like to see a motivation for this inner product on function spaces. Why would somebody come up with it, if they had never seen it before? I know that $\sin(x)$ and $\cos(x)$ are orthogonal, but only because their inner product as above is zero. I'm convinced that, without recourse to this integral, there is still a sense in which $\sin(x)$ and $\cos(x)$ are orthogonal, which would have naturally led to the construction of this inner product. I'm interested in finding this out.

Would anyone be able to provide some insight towards what I've discussed here?



Your third paragraph rejects many of the usual analogies. Let me try another. The orthogonality of the functions $\sin nx$ and $\cos mx$ is precisely what allows you to expand a function as a Fourier series - a sum of sines and cosines with various amplitudes, just as you express an arbitrary vector in $n$-space as a linear combination of basis vectors.

Fourier came up with this idea in his study of partial differential equations, although he did not have our modern terminology to describe it.

The picture is even clearer for complex function space, where you use the exponentials $e^{inx}$ for $n \in \mathbb{Z}$ instead of the sines and cosines. (Euler's formula connects the two bases.)
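To see why the exponential picture is cleaner, here is the standard one-line orthogonality computation, with the period taken to be $[0, 2\pi]$: for integers $n \neq m$,
$$\int_0^{2\pi} e^{inx}\,\overline{e^{imx}}\,dx = \int_0^{2\pi} e^{i(n-m)x}\,dx = \left[\frac{e^{i(n-m)x}}{i(n-m)}\right]_0^{2\pi} = 0,$$
while for $n = m$ the integrand is identically $1$ and the integral is $2\pi$. Euler's formula $e^{inx} = \cos nx + i\sin nx$ then translates these relations into the separate sine and cosine orthogonality relations.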


The "orthogonality" of the trigonometric functions was discovered well before an inner product was defined. It was discovered in the process of trying to solve the wave equation for a thin wire. In 1750 Bernoulli proposed a general description of the motion of a thin wire stretched between $x=0$ and $x=a$ as a displacement from rest $u$ given by $$ u(x,t)=\sum_{n=1}^{\infty}a_n\sin\frac{n\pi x}{a}\cos\frac{n\pi c}{a}(t-\beta_n) $$ for suitable values of $a_n$ and $\beta_n$. (Here $c$ is a constant determined by the physical properties of the wire or string.)

In 1753 Euler noticed that, if Bernoulli's conjecture were true, then the condition at $t=0$ would imply that any "mechanical" function $f$ describing the initial displacement of the string on $[0,a]$ would necessarily have to be expressible as $$ f(x)=\frac{1}{2}a_0+\left(a_1\cos\frac{\pi x}{a}+b_1\sin \frac{\pi x}{a}\right)+\left(a_2\cos\frac{2\pi x}{a}+b_2\sin\frac{2\pi x}{a}\right)+\cdots. $$ Euler's opinion was that this could not happen for a general mechanical function $f$, and most mathematicians agreed.

Remarkably, Euler and Clairaut nevertheless discovered what the coefficients would have to be in order to have such a representation of $f$: if you select any two different functions from $$ 1,\cos\frac{\pi x}{a},\sin\frac{\pi x}{a},\cos\frac{2\pi x}{a},\sin\frac{2\pi x}{a},\cdots $$ and integrate their product over a full period, say $[-a,a]$, you obtain $0$. So they reasoned that if $$ f(x) = \frac{1}{2}a_0+a_1\cos\frac{\pi x}{a}+b_1\sin\frac{\pi x}{a}+a_2\cos\frac{2\pi x}{a}+b_2\sin\frac{2\pi x}{a}+\cdots, $$ then you could multiply both sides by one of these functions, integrate over a full period, and isolate the corresponding coefficient $a_n$ or $b_n$. This is where the idea of "orthogonality" of functions first arose.

However, Euler and Clairaut did not believe that a general mechanical displacement function $f$ could be expressed in this way; they felt that such an expansion put a constraint on the type of function $f$ that could be used if one wanted to solve the wave equation by Bernoulli's method. Fourier believed that every mechanical function $f$ could be expanded in this way. Fourier was correct, and that is why the coefficients are named after him, and not after Euler and Clairaut, who actually discovered the relations and the expressions for the coefficients.
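These relations, and the multiply-and-integrate trick for extracting a coefficient, are easy to check numerically. Here is a sketch over a full period $[-a, a]$; the interval length, grid size, and sample coefficients below are arbitrary choices of mine, not anything from the historical record:

```python
import numpy as np
from itertools import combinations

# Numerical check of the Euler-Clairaut orthogonality relations over a
# full period [-a, a]. The choices a = 1, N = 20_000, and the sample
# coefficients further down are purely illustrative.
a = 1.0
N = 20_000
x = np.linspace(-a, a, N, endpoint=False)
dx = 2 * a / N

def integrate(values):
    """Riemann-sum approximation of the integral over [-a, a]."""
    return np.sum(values) * dx

# The family 1, cos(pi x/a), sin(pi x/a), cos(2 pi x/a), sin(2 pi x/a), ...
family = [np.ones_like(x)]
for n in (1, 2, 3):
    family.append(np.cos(n * np.pi * x / a))
    family.append(np.sin(n * np.pi * x / a))

# Any two *different* members integrate against each other to (numerically) 0.
for u, v in combinations(family, 2):
    assert abs(integrate(u * v)) < 1e-9

# Coefficient extraction: build f with known coefficients a_0 = 4, a_1 = 3,
# b_2 = -2, then recover a_1 and b_2 by multiplying and integrating.
# (The norm of each cos or sin member over [-a, a] is a.)
f = 0.5 * 4 + 3 * np.cos(np.pi * x / a) - 2 * np.sin(2 * np.pi * x / a)
a1 = integrate(f * np.cos(np.pi * x / a)) / a
b2 = integrate(f * np.sin(2 * np.pi * x / a)) / a
print(round(a1, 6), round(b2, 6))  # recovers 3.0 and -2.0
```

Every cross term dies under the integral, so each coefficient "drops out" on its own, which is exactly the observation Euler and Clairaut made analytically.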

Fourier continued to study PDEs, and he discovered other such orthogonality relations; he showed that orthogonality of this type is far more common than one might expect. A great deal of effort was expended over the next century in trying to understand why orthogonal expansions of this type worked. This eventually led to the general framework developed by David Hilbert and later put into its modern axiomatic form by John von Neumann, now known as Hilbert space: functions are treated as points in an abstract space whose geometry comes from an inner product. The study of such function spaces also fed into the abstract notions of metric spaces and, later, general topology. But it all started with the vibrating string and Fourier's study of separation of variables and general function expansions.

The most confounding part of this history is that it came before even the Cauchy-Schwarz inequality for finite-dimensional spaces. Orthogonality of functions and an integral inner product came before the systematic study of finite-dimensional inner products. Infinite-dimensional function expansions, eigenfunction expansions, and symmetric operators came before a general treatment of finite-dimensional inner product spaces: the most abstract settings were studied before their concrete finite-dimensional counterparts. The motivation for the finite-dimensional theory came from the infinite-dimensional function spaces arising out of the solution of PDEs. So looking to the finite-dimensional case to motivate the infinite-dimensional one is, historically, backwards.