Arbitrary initial function for the heat equation


I'm currently going through my professor's notes on the 1-D heat equation and I ran into a bit of a weird thing. Suppose we set out to solve the heat equation $$ \partial _t \phi - \alpha \nabla ^2\phi =0 $$ where $\phi = \phi(x,t)$ over $(0,\pi) \times (0, \infty)$, given that $$ \partial_x \phi(0,t) = \partial _x \phi(\pi, t) = 0. $$

This is the popular 1-D homogeneous bar with insulated ends. Now, in his notes he proceeds to use separation of variables to find the family of eigenfunctions of $$ L = \partial _t - \alpha \nabla ^2 $$ and we arrive at a proposed solution in terms of a cosine Fourier series, $$ \phi (x,t) = c_0 + \sum _{n=1}^{\infty}c_n e^{-\alpha n^2 t} \cos(nx). $$
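As a quick numerical sanity check (a sketch, not part of the notes; the diffusivity and the coefficients below are arbitrary illustrative values), a truncated series of this form does satisfy the heat equation, which finite differences confirm:

```python
import numpy as np

# Truncated cosine series with arbitrary coefficients (illustrative values).
alpha = 0.5
c0, c = 1.0, np.array([0.7, -0.3, 0.2, 0.1])

def phi(x, t):
    n = np.arange(1, len(c) + 1)
    return c0 + np.sum(c * np.exp(-alpha * n**2 * t) * np.cos(n * x))

# Central finite differences for phi_t and phi_xx at an interior point.
x0, t0, h = 1.3, 0.5, 1e-4
phi_t = (phi(x0, t0 + h) - phi(x0, t0 - h)) / (2 * h)
phi_xx = (phi(x0 + h, t0) - 2 * phi(x0, t0) + phi(x0 - h, t0)) / h**2
residual = abs(phi_t - alpha * phi_xx)
print(residual)  # ~0, up to finite-difference error
```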

However, this is where things get fishy for me. He claims that with this generic solution we can solve for the $c_n$ coefficients given any initial condition $$ \phi(x,0) = g(x). $$ This is very strange to me. How is it possible to accurately approximate an arbitrary function over $(0,\pi)$ using only cosines? How can we be sure that a sum of cosines alone will converge to $g(x)$ in the $L^2$ sense?

There are 2 best solutions below


This is a trick in Fourier analysis that can look confusing at first, but it's not too bad once you get the hang of it.

Start with \begin{equation} \phi(x,t) = c_0 + \sum_{n=1}^\infty c_n e^{-\alpha n^2t} \cos(nx) \end{equation} Then, setting $t = 0$, \begin{equation} \phi(x,0) = g(x) = c_0 + \sum_{n=1}^\infty c_n\cos(nx) \end{equation} Note that for $m, n \geq 1$, \begin{equation} \int_{0}^{\pi} \cos(mx)\cos(nx)\,dx = \ \begin{cases} 0 & \text{if $n \neq m$} \\ \frac{\pi}{2} & \text{if $n=m$} \\ \end{cases} \ \end{equation} (while for $n = m = 0$ the integral is $\pi$, and $\int_0^\pi \cos(mx)\,dx = 0$ for $m \geq 1$).
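This orthogonality relation is easy to confirm numerically; here is a minimal sketch using a hand-rolled trapezoidal rule:

```python
import numpy as np

x = np.linspace(0.0, np.pi, 20001)

def inner(n, m):
    """Trapezoidal approximation of the integral of cos(nx)cos(mx) on (0, pi)."""
    f = np.cos(n * x) * np.cos(m * x)
    return float(np.sum((f[:-1] + f[1:]) * np.diff(x)) / 2)

print(inner(3, 5))  # ~0      (n != m)
print(inner(4, 4))  # ~pi/2   (n = m >= 1)
print(inner(0, 0))  # ~pi     (the constant mode is special)
```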

Now, to exploit this orthogonality property, we fix some $m \geq 1$, multiply both sides by $\cos(mx)$, and integrate from $0$ to $\pi$.

We get,

\begin{equation} \ \int_{0}^\pi g(x) \cos(mx) \, dx = \int_{0}^\pi \Big( c_0\cos(mx) + \sum_{n=1}^{\infty} c_n \cos(nx)\cos(mx) \Big) \, dx \ \end{equation}

The first integral vanishes, since $\int_0^\pi c_0 \cos(mx)\,dx = c_0 \sin(m\pi)/m = 0$ for $m \geq 1$, leaving \begin{equation} \ \int_{0}^\pi g(x) \cos(mx) \, dx = \int_{0}^\pi \sum_{n=1}^{\infty} c_n \cos(nx)\cos(mx) \, dx \ \end{equation}

The tricky step here is to switch the integral and the sum. This is justified by the Fubini-Tonelli theorems and requires some real analysis, so I won't go over the full justification here. If you're interested, see the question "When can a sum and integral be interchanged?" for a full justification.

Interchanging the sum and the integral, we get: \begin{equation} \ \int_{0}^\pi g(x) \cos(mx) \, dx = \sum_{n=1}^{\infty} c_n \int_{0}^\pi \cos(nx)\cos(mx) \, dx \ \end{equation}

Now, we can finally exploit the orthogonality of the cosines. When $n=m$ the integral is $\pi/2$, and when $n \neq m$ it is $0$. Therefore, the only non-zero term in the summation is the one with $n = m$, which contributes $c_m \cdot \pi/2$.

Finally, this leads us to the formula for the coefficients:

(Note that since $n$ was just a summation index, the one surviving coefficient is naturally labeled by $m$.) \begin{equation} \ c_m = \frac{2}{\pi} \int_{0}^\pi g(x) \cos(mx) \, dx \ \end{equation} The constant term comes from the same trick: integrating $g(x) = c_0 + \sum_{n=1}^\infty c_n \cos(nx)$ over $(0,\pi)$ kills every cosine and gives $c_0 = \frac{1}{\pi}\int_0^\pi g(x)\,dx$.
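To see the formula in action, here is a sketch that computes the coefficients for a sample initial profile (I picked $g(x) = x$ purely for illustration) and checks that the truncated cosine series recovers it in the $L^2$ sense:

```python
import numpy as np

x = np.linspace(0.0, np.pi, 20001)
g = x.copy()  # sample initial condition g(x) = x (an arbitrary choice)

def trap(f):
    # trapezoidal rule on the grid x
    return float(np.sum((f[:-1] + f[1:]) * np.diff(x)) / 2)

c0 = trap(g) / np.pi                       # constant term: (1/pi) * int g
M = 200
cm = [2.0 / np.pi * trap(g * np.cos(m * x)) for m in range(1, M + 1)]

series = c0 + sum(ci * np.cos(m * x) for m, ci in enumerate(cm, start=1))
err = np.sqrt(trap((series - g) ** 2))     # L^2 error on (0, pi)
print(c0, err)  # c0 ~ pi/2; err is small and shrinks as M grows
```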

However, these are definite integrals, so as Hans Lundmark pointed out, this only works for some functions $g(x)$, specifically those that are "nice enough" for the integrals to converge.

In addition, $g(x) = \phi(x,0)$, so it must be compatible with the boundary conditions, which is what makes $g(x)$ representable by cosines just like $\phi(x,t)$.
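The boundary-condition compatibility can be seen directly: each mode $\cos(nx)$ has derivative $-n\sin(nx)$, which vanishes at $x=0$ and $x=\pi$, so any finite cosine sum satisfies the insulated-end conditions regardless of the coefficients. A tiny spot check (coefficients chosen at random):

```python
import numpy as np

rng = np.random.default_rng(1)
c = rng.normal(size=10)  # arbitrary coefficients c_1..c_10

def dphi(x):
    # x-derivative of c_0 + sum c_n cos(nx); the constant drops out
    n = np.arange(1, len(c) + 1)
    return float(np.sum(-c * n * np.sin(n * x)))

print(dphi(0.0), dphi(np.pi))  # both ~0 (roundoff only)
```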


After some thought, I figured out what makes it possible to approximate any sufficiently nice function $g(x)$ with cosines in this context.

The point is that, while the cosines are $2\pi$-periodic, the solution we're seeking only needs to be defined on $(0,\pi)$, and this lets "one half" of each cosine do the approximating, as if beyond $x=\pi$ we had a reflected copy of $g(x)$.

For example: assume for the sake of argument that instead of solving on $(0,\pi)$ we set out to solve on $(-\pi,0)$, with $g(x)$ defined over this interval. Our family of eigenfunctions would remain the same, and all we have left is to compute the coefficients. According to the theory of Fourier series, we can find a suitable inner product under which our family of eigenfunctions is orthogonal. Say we choose the usual $$ \langle f,g \rangle = \int _{-\pi}^{\pi} f(x) \overline{g(x)} \, dx $$ and let our coefficients be given by $$ c_0 = \frac{1}{2\pi} \int _{-\pi}^{\pi} g(x) \, dx \qquad c_n = \frac{1}{\pi} \int _{-\pi}^{\pi} g(x) \cos (nx) \, dx. $$

But this raises the question of how to define an extension $g ^{\dagger} (x)$ over the full domain of integration, $(-\pi, \pi)$, such that the resulting series actually converges to $g(x)$. If we simply take $g ^{\dagger} (x)$ to be the $2\pi$-periodic extension of $g(x)$, we may very well run into situations where $c_n = 0$ for all $n \geq 1$; e.g. the best approximation is a constant function. However, if we choose $$ g^{\dagger} (x) = \begin{cases} g(x), & -\pi < x < 0; \\ g(-x), & 0 \leq x < \pi, \end{cases} $$ we obtain an even function over $(-\pi, \pi)$, which is guaranteed to require only cosine terms (and a constant), and which satisfies $$ \int _{-\pi}^{0} g(x) \cos (nx) \, dx = \frac{1}{2} \int_{-\pi}^{\pi} g^{\dagger} (x) \cos (nx) \, dx. $$

Hence, by expanding $g ^{\dagger} (x)$ (the reflected extension of $g$, if you will), we obtain the correct coefficients such that $$ \phi(x,0) = g(x); $$ concretely, the correct $c_n$ are given by $$ c_0 = \frac{1}{2\pi} \int_{-\pi}^{\pi} g ^{\dagger} (x) \, dx = \frac{1}{\pi} \int_{-\pi}^{0} g(x) \, dx $$ and $$ c_n = \frac{1}{\pi} \int _{-\pi}^{\pi} g ^{\dagger}(x) \cos(nx) \, dx = \frac{2}{\pi} \int _{-\pi}^{0} g(x) \cos (nx) \, dx. $$ The convergence to $g(x)$ over $(-\pi, 0)$ is implied by the convergence to $g ^{\dagger} (x)$ over $(-\pi, \pi)$.
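The reflection argument can also be checked numerically. The sketch below takes $g(x) = e^x$ on $(-\pi,0)$ (an arbitrary choice), builds the even extension $g^{\dagger}(x) = e^{-|x|}$, and verifies both the half-interval coefficient identity and the $L^2$ recovery of $g$:

```python
import numpy as np

xl = np.linspace(-np.pi, 0.0, 10001)    # left half, where g lives
xf = np.linspace(-np.pi, np.pi, 20001)  # full interval
g = np.exp(xl)                          # sample g(x) = e^x on (-pi, 0)
g_dag = np.exp(-np.abs(xf))             # its even (reflected) extension

def trap(f, x):
    # trapezoidal rule on the grid x
    return float(np.sum((f[:-1] + f[1:]) * np.diff(x)) / 2)

# Coefficient identity: (1/pi) int_{-pi}^{pi} g† cos(nx) = (2/pi) int_{-pi}^{0} g cos(nx)
n = 5
c_full = trap(g_dag * np.cos(n * xf), xf) / np.pi
c_half = 2.0 / np.pi * trap(g * np.cos(n * xl), xl)
print(abs(c_full - c_half))             # ~0

# Truncated cosine series built only from integrals of g over (-pi, 0)
M = 100
c0 = trap(g, xl) / np.pi
cs = [2.0 / np.pi * trap(g * np.cos(m * xl), xl) for m in range(1, M + 1)]
series = c0 + sum(ci * np.cos(m * xl) for m, ci in enumerate(cs, start=1))
err = np.sqrt(trap((series - g) ** 2, xl))  # L^2 error on (-pi, 0)
print(err)                              # small; shrinks as M grows
```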