What Motivated the Definition of the Orthogonality of Functions?


I am curious to know why the orthogonality of two (real) functions $f(x)$, $g(x)$ is given by:

$$\int_{-L}^{L} f(x) \,g(x) \; \text{d}x = 0$$

I see a kind of similarity between this definition and the orthogonality of vectors $\vec{v}$, $\vec{w}$ $\in$ $\mathbb{R}^n$, $\,$ viz. $\vec{v} \cdot \vec{w} = \sum_i v_i \, w_i = 0$. It even makes sense to me that the domain of integration should play an important role in this result. However, I'm at a loss to imagine

a) the context that would've prompted such an extension;

b) the meaning of orthogonality here (i.e. is there a way of thinking about this definition that is as intuitive as geometric orthogonality in the vector case, where we can picture orthogonal vectors in $\mathbb{R}^2$ and $\mathbb{R}^3$ and extend the concept to higher dimensions?).

Perhaps the most concise way of asking my question would be: is there an alternative way of viewing the definition of orthogonality of functions that is analogous to the geometric definition of the vector dot product (i.e. $\vec{v} \cdot \vec{w} = |\vec{v}|\, |\vec{w}| \cos\theta$)?

I looked at this question, but it doesn't really get at what I'm after.



Yes, it's no coincidence that vectors are called orthogonal if their dot product is zero, and the functions you're considering are called orthogonal if the integral of their product is zero. Both of them are examples of an inner product on a vector space.

If $V$ is a vector space over $\mathbb{R}$, an inner product on $V$ is a map $(\_,\_) \colon V \times V \to \mathbb{R}$ which is

  • symmetric: $(v,w) = (w,v)$ for all $v$ and $w$ in $V$
  • bilinear: $(c_1 v_1 + c_2 v_2,w) = c_1 (v_1,w) + c_2 (v_2,w)$ and $(v,c_1w_1 + c_2 w_2) = c_1(v,w_1) + c_2(v,w_2)$ for all $v, w, v_1, v_2, w_1, w_2$ in $V$ and $c_1,c_2$ in $\mathbb{R}$
  • positive-definite: $(v,v) \geq 0$ for all $v \in V$. Furthermore, $(v,v) = 0 \iff v=0$.
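As a concrete sanity check, here is a short numerical sketch (the function name `inner` and the random test vectors are my own choices, not part of the answer) verifying that the standard dot product on $\mathbb{R}^n$ satisfies the three axioms:

```python
import numpy as np

rng = np.random.default_rng(0)

def inner(v, w):
    """The standard dot product on R^n, one concrete inner product."""
    return float(np.dot(v, w))

v, w, v1, v2 = (rng.standard_normal(4) for _ in range(4))
c1, c2 = 2.0, -3.0

# symmetric: (v, w) = (w, v)
assert np.isclose(inner(v, w), inner(w, v))

# linear in the first slot (with symmetry, this gives bilinearity)
assert np.isclose(inner(c1 * v1 + c2 * v2, w),
                  c1 * inner(v1, w) + c2 * inner(v2, w))

# positive-definite: (v, v) > 0 for v != 0, and (0, 0) = 0
assert inner(v, v) > 0
assert inner(np.zeros(4), np.zeros(4)) == 0.0
```

The same checks pass for the integral inner product on functions, with sums replaced by integrals.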

If $V$ has an inner product, we can call vectors orthogonal if their inner product is zero.

In linear algebra, you often try to diagonalize a linear operator. That makes the operator very easy to study, because on each basis vector it acts by scalar multiplication. When the vector space has an inner product and the operator is normal, the operator is diagonalizable with an orthogonal basis of eigenvectors.

I guess what motivates the extension of the inner product to infinite-dimensional vector spaces is that there are important linear operators on those spaces: for instance, the derivative. With the inner product you mentioned, and assuming the functions take the same values at $\pm L$, the derivative is skew-adjoint, and in particular normal.
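Skew-adjointness means $(f', g) = -(f, g')$ for all such $f$ and $g$; it follows from integration by parts, with the boundary terms cancelling because the endpoint values match. A quick numerical illustration (my own sketch, using $f = \sin$, $g = \cos$ on $[-\pi, \pi]$, where both functions agree at the endpoints):

```python
import numpy as np

# Check (f', g) = -(f, g') for the inner product (f, g) = ∫_{-L}^{L} f g dx,
# using a simple Riemann sum in place of the integral.
L = np.pi
n = 200_000
dx = 2 * L / n
x = -L + dx * np.arange(n)          # sample points on [-L, L)

f, fp = np.sin(x), np.cos(x)        # f and its derivative f'
g, gp = np.cos(x), -np.sin(x)       # g and its derivative g'

lhs = np.sum(fp * g) * dx           # (f', g) = ∫ cos^2 dx = π
rhs = np.sum(f * gp) * dx           # (f, g') = -∫ sin^2 dx = -π
print(lhs, rhs)                     # lhs ≈ π, rhs ≈ -π, so lhs = -rhs
```

With these particular functions the two sides come out to $\pi$ and $-\pi$, matching the integration-by-parts identity.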

As for a geometric motivation for this inner product, or the associated orthogonality, you know that an integral is a limit of Riemann sums over finer and finer partitions. For each $n$, we can set $\Delta x = \frac{2L}{n}$, $x_0 = -L$, $x_1 = -L + \Delta x$, $x_2 = -L + 2 \Delta x$, et cetera, until $x_n = L$. Then $$ \int_{-L}^L f(x)g(x)\,dx = \lim_{n\to\infty} \sum_{i=1}^n f(x_i)g(x_i)\,\Delta x $$ In other words, this integral inner product is a limit of finite dimensional inner products formed by sampling $f$ and $g$ at a regular set of points.
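The limit above can be watched numerically. In this sketch (my own choice of example) I take $f = \sin$ and $g = \cos$ on $[-\pi, \pi]$, which are orthogonal in the integral sense, and show that the sampled dot product tends to $0$ as the partition is refined:

```python
import numpy as np

# The "sampled" dot product  sum_i f(x_i) g(x_i) Δx  is an ordinary
# finite-dimensional dot product (scaled by Δx), and it converges to
# the integral ∫ f g dx as n grows.
L = np.pi
f, g = np.sin, np.cos   # orthogonal on [-π, π]: ∫ sin(x) cos(x) dx = 0

for n in (10, 100, 1000):
    dx = 2 * L / n
    x = -L + dx * np.arange(1, n + 1)       # right-endpoint samples x_1, ..., x_n
    print(n, np.sum(f(x) * g(x)) * dx)      # tends to 0 as n → ∞
```

So orthogonality of functions really is orthogonality of very long sample vectors, in the limit of ever finer sampling.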