One Function Expanding into Another Function?


How can one expand the function $f_1(x) = x$ on $(-\pi, \pi)$ in terms of the functions $\cos nx,\; n = 0, 1, 2, \dots$ and $\sin nx,\; n = 1, 2, \dots$? I read that the function $$f_2(x) = a_0 + \sum_{n=1}^{\infty}(a_n \cos nx + b_n \sin nx)$$

is the expansion of $f_1$. How is that?

Main Question:

Is $f_1$ equivalent to $f_2$ in some way? If so, please provide a detailed proof that is easy to follow.

I have attached the excerpt below:

[image: textbook excerpt]



As already mentioned in the comments, this is known as a Fourier series. The set of square-integrable functions forms an infinite-dimensional vector space called $L^2$-space:

$$L^2[-\pi, +\pi] =\bigg\{f\colon[-\pi, +\pi]\to\mathbb R \biggm\vert \int\limits_{-\pi}^{+\pi} \!\! |f(x)|^2 dx < \infty \bigg\}$$

In fact, it is even a Hilbert space with the inner product

$$ \langle f, g\rangle_{L^2[-\pi, +\pi]} = \int\limits_{-\pi}^{+\pi} \!\!f(x)g(x) d x$$

which induces the norm $\|f\|_{L^2[-\pi, +\pi]}^2 = \langle f, f\rangle_{L^2[-\pi, +\pi]}$. The above is not 100% correct; there is a technicality that needs to be accounted for: one must "mod out" functions whose difference has $L^2$-norm zero. For example, consider the functions $f(x) = 0$ and $g(x) = \begin{cases}1, & x=0\\0, & x\neq 0\end{cases}$. Here $\|f-g\|=0$, since the functions differ only on a set of measure zero. However, for $L^2$ to be a proper normed vector space, we need $\|h\|=0 \iff h=0$. Therefore, we identify two functions with each other whenever their $L^2$ distance is zero. (Alternatively, one can define $L^2$ as the completion of the space of square-integrable continuous functions.)
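These definitions are easy to probe numerically. The following sketch (my own illustration, using `scipy.integrate.quad` and functions of my own choosing) shows the orthogonality that makes the Fourier construction work:

```python
import numpy as np
from scipy.integrate import quad

# The inner product on L^2[-pi, pi] as defined above: <f, g> = ∫ f(x) g(x) dx
def inner(f, g):
    val, _ = quad(lambda x: f(x) * g(x), -np.pi, np.pi)
    return val

# Distinct trigonometric modes are orthogonal under this inner product:
print(inner(np.sin, lambda x: np.sin(2 * x)))  # ~0
print(inner(np.cos, np.sin))                   # ~0
# ...while each non-constant mode has squared norm pi:
print(inner(np.sin, np.sin))                   # ~pi
```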

With this in mind, we can do linear algebra in $L^2$; in particular, we can construct orthogonal bases (sometimes called Schauder bases in the case of infinite-dimensional vector spaces) and express functions in terms of these bases. And this is exactly what the Fourier series does! The functions $f_1$ and $f_2$ are then equivalent in the $L^2$ sense:

$$ \lim_{N\to\infty}\bigg\|f_1(x) - \Big(a_0 + \sum_{n=1}^N \big(a_n \cos(nx) + b_n\sin(nx)\big)\Big)\bigg\| = 0$$
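This convergence can be illustrated numerically for the question's $f_1(x) = x$, whose coefficients work out to $a_n = 0$ and $b_n = 2(-1)^{n+1}/n$ (a standard computation, assumed rather than derived here). A rough sketch, with the grid size and truncation levels chosen arbitrarily:

```python
import numpy as np

# Partial Fourier sums of f1(x) = x on (-pi, pi), using the known
# coefficients b_n = 2(-1)^(n+1)/n (assumed, not derived here).
x = np.linspace(-np.pi, np.pi, 20001)
dx = x[1] - x[0]

def partial_sum(N):
    S = np.zeros_like(x)
    for n in range(1, N + 1):
        S += (2 * (-1) ** (n + 1) / n) * np.sin(n * x)
    return S

def l2_norm(g):
    # Riemann-sum approximation of the L^2 norm (∫ g^2 dx)^(1/2)
    return np.sqrt(np.sum(g ** 2) * dx)

errors = [l2_norm(x - partial_sum(N)) for N in (1, 10, 100)]
print(errors)  # strictly decreasing toward 0
```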


@Hyperplane's answer already does a great job of explaining what this is all about, but just in case you're not comfortable with all of the "infinite dimensional Hilbert space" and "scalar product" stuff, I'll relate it to something a bit more concrete.

Let's talk about vectors in the Hilbert space $\mathbb{R}^3$. Suppose we have a vector $\mathbf{v}$ with $x$ component $a$, $y$ component $b$, and $z$ component $c$. Now, we could express this as $\mathbf{v}=(a,b,c)$, but there is a potentially more informative way of expressing it, which we write as $$\mathbf{v}=a\hat{\mathbf{i}}+b\hat{\mathbf{j}}+c\hat{\mathbf{k}}$$ This might seem really basic, but what exactly is going on here? This is surprisingly deep: I've chosen a basis for the vector space $\mathbb{R}^3$ which I can use to express any vector as a sum of the basis vectors.

But we can express $\mathbf{v}$ in yet another way still! Notice that $\mathbf{v}\boldsymbol{\cdot}\hat{\mathbf{i}}=a$, $\mathbf{v}\boldsymbol{\cdot}\hat{\mathbf{j}}=b$, and $\mathbf{v}\boldsymbol{\cdot}\hat{\mathbf{k}}=c$. So we can write $\mathbf{v}$ as $(\mathbf{v}\boldsymbol{\cdot}\hat{\mathbf{i}},\mathbf{v}\boldsymbol{\cdot}\hat{\mathbf{j}},\mathbf{v}\boldsymbol{\cdot}\hat{\mathbf{k}})$. Why does this work? Well, if we want to know what the $x$ component of $\mathbf{v}$ is, we would measure how much it points in the $x$ direction, or how similar $\mathbf{v}$ is to the $x$ unit vector, $\hat{\mathbf{i}}$, and the tool we use to measure this is known as the dot product.

It's also easy to see how we can extend this to any finite number of dimensions. If $\mathbf{v}=(a_1,...,a_n)$, we can also express it as $$\mathbf{v}=\sum_{i=1}^{n}a_i\hat{\mathbf{e}_i}=((\mathbf{v}\boldsymbol{\cdot}\hat{\mathbf{e}_1}),...,(\mathbf{v}\boldsymbol{\cdot}\hat{\mathbf{e}_n}))=\sum_{i=1}^{n}(\mathbf{v}\boldsymbol{\cdot}\hat{\mathbf{e}_i})\hat{\mathbf{e}_i}$$ Or, with some care, to an infinite number of dimensions: if $\mathbf{v}=(a_1,a_2,...)$, then $$\mathbf{v}=\sum_{i=1}^{\infty}a_i\hat{\mathbf{e}_i}=((\mathbf{v}\boldsymbol{\cdot}\hat{\mathbf{e}_1}),(\mathbf{v}\boldsymbol{\cdot}\hat{\mathbf{e}_2}),...)=\sum_{i=1}^\infty (\mathbf{v}\boldsymbol{\cdot}\hat{\mathbf{e}_i})\hat{\mathbf{e}_i}$$ provided $\mathbf{v}$ is of finite length, i.e. $\Vert\mathbf{v}\Vert^2=\mathbf{v}\boldsymbol{\cdot}\mathbf{v}=\sum_{i=1}^\infty {a_i}^2$ is finite.
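The finite-dimensional version of this is easy to verify directly; here is a small NumPy sketch (my own example values) of recovering a vector's components by dotting with the standard basis and then rebuilding the vector:

```python
import numpy as np

# The R^3 story in code: recover each component of v by dotting with the
# standard basis vectors, then rebuild v from those dot products.
v = np.array([3.0, -1.0, 2.0])
basis = np.eye(3)  # rows are i-hat, j-hat, k-hat

coeffs = np.array([v @ e for e in basis])        # v·e_i picks out component i
rebuilt = sum(c * e for c, e in zip(coeffs, basis))

print(coeffs, rebuilt)  # both equal v's components
```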

So how can we use these ideas when talking about the Hilbert space of square-integrable functions, also known as $L^2$ functions, on the interval $[-\pi,\pi]$? Here, instead of a vector $\mathbf{v}$, we have an $L^2$ function $f$. Instead of the dot product $\_\boldsymbol{\cdot}\_$, we have the scalar or inner product $\langle\_,\_\rangle$, defined as $$\langle f,g \rangle=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)g(x)\mathrm{d}x$$ As mentioned by @Hyperplane, in order for us to be able to deal with the function $f$ we need to check that $$\langle f,f\rangle =\frac{1}{2\pi}\int_{-\pi}^\pi f(x)^2 \mathrm{d}x$$ is finite.

Finally, instead of the basis vectors $\hat{\mathbf{e}_1},\hat{\mathbf{e}_2},\hat{\mathbf{e}_3},...$ we have the following basis functions: the cosine functions $\cos(x)$, $\cos(2x),...$, the sine functions $\sin(x)$, $\sin(2x),...$, and finally the constant function $1$. Therefore, we can represent $f$ as a sum of all of these basis functions. And in order to work out the coefficients in the sum, we use the inner product we mentioned before: we measure how similar $f$ is to the constant function and to each of the cosine and sine functions. Thus, we can write $$f(x)=\langle f(x),1\rangle +2\sum_{n=1}^\infty \bigg[ \left\langle f(x),\cos(nx)\right\rangle \cos(nx)\bigg]+2\sum_{n=1}^\infty \bigg[ \left\langle f(x),\sin(nx)\right\rangle \sin(nx)\bigg]$$ The $2$s in front of the sums are a little difficult to explain (see the edit at the end), but I hope you get the idea. The coefficients $2\left\langle f,\cos(nx)\right\rangle$ and $2\left\langle f,\sin(nx)\right\rangle$ are often given the names $a_n$ and $b_n$, and $\langle f,1\rangle$ is often given the name $\frac{a_0}{2}$, so we can write $$f(x)=\frac{a_0}{2}+\sum_{n=1}^\infty \bigg[a_n\cos(nx)+b_n\sin(nx)\bigg].$$
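This recipe can be carried out numerically for a concrete function. A sketch, using $f(x)=x^2$ (my own choice of example, not from the post) and the $\frac{1}{2\pi}$-normalized inner product above:

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the recipe above for a concrete f (my choice: f(x) = x**2),
# using the (1/2pi)-normalized inner product from this answer.
def inner(f, g):
    val, _ = quad(lambda t: f(t) * g(t), -np.pi, np.pi, limit=200)
    return val / (2 * np.pi)

f = lambda t: t ** 2
N = 50  # truncation level, chosen arbitrarily

a0 = 2 * inner(f, lambda t: 1.0)  # so that a0/2 = <f, 1>
a = [2 * inner(f, lambda t, n=n: np.cos(n * t)) for n in range(1, N + 1)]
b = [2 * inner(f, lambda t, n=n: np.sin(n * t)) for n in range(1, N + 1)]

def series(t):
    return a0 / 2 + sum(a[n - 1] * np.cos(n * t) + b[n - 1] * np.sin(n * t)
                        for n in range(1, N + 1))

# The truncated series tracks f at interior points of (-pi, pi):
print(series(1.0), f(1.0))
```

The $b_n$ come out (numerically) zero because $x^2$ is even, and $a_0/2$ matches the known mean value $\pi^2/3$ of $x^2$ on the interval.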

EDIT: I just wanted to add to my answer to make it somewhat more complete. The reason there is a $2$ in front of the cosine and sine sums is that while the functions $\cos(nx)$ and $\sin(nx)$ do form a basis of the $L^2$ functions on $[-\pi,\pi]$, they are not an orthonormal basis; that is, $$\frac{1}{2\pi}\int_{-\pi}^\pi\sin^2(nx)\mathrm{d}x=\frac{1}{2\pi}\int_{-\pi}^\pi\cos^2(nx)\mathrm{d}x=\frac{1}{2}\neq 1$$ So the coefficient of each basis function must be divided by its squared norm, which turns $\langle f(x),\cos(nx)\rangle$ into $2\langle f(x),\cos(nx)\rangle$. Thus we can write $$a_n=2\langle f(x),\cos(nx)\rangle=\frac{1}{2\pi}\int_{-\pi}^\pi f(x)\cdot2\cos(nx)\mathrm{d}x=\frac{1}{\pi}\int_{-\pi}^\pi f(x)\cos(nx)\mathrm{d}x$$ and similarly for $b_n$. We then write $\frac{a_0}{2}$ to stay consistent with the definition of $a_n$ above. Notice that $$a_0=\frac{1}{\pi}\int_{-\pi}^\pi f(x)\cos(0\cdot x)\mathrm{d}x=\frac{1}{\pi}\int_{-\pi}^\pi f(x)\mathrm{d}x$$ whereas the coefficient in front of the constant function $1$ (which already has norm $1$) is simply $$\langle f(x),1\rangle=\frac{1}{2\pi}\int_{-\pi}^\pi f(x)\mathrm{d}x=\frac{a_0}{2}$$
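As a final sanity check (my addition, not part of the original answer), the formula for $b_n$ can be applied to the question's $f_1(x)=x$: the $a_n$ vanish because $x$ is odd, and $b_n=\frac{1}{\pi}\int_{-\pi}^\pi x\sin(nx)\,\mathrm{d}x$ evaluates, via integration by parts, to $2(-1)^{n+1}/n$:

```python
import numpy as np
from scipy.integrate import quad

# b_n = (1/pi) * integral of x*sin(nx) over [-pi, pi] for f1(x) = x,
# compared against the closed form 2(-1)^(n+1)/n from integration by parts.
def b(n):
    val, _ = quad(lambda x: x * np.sin(n * x), -np.pi, np.pi, limit=200)
    return val / np.pi

for n in range(1, 6):
    print(n, b(n), 2 * (-1) ** (n + 1) / n)  # the two columns agree
```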