The rate of convergence of the Fourier series near the discontinuity of a function


I'm trying to understand how bad the Gibbs phenomenon is. Say I have, for instance, a square wave on $[0, 1]$, and I want to approximate it with a Fourier series. How many terms would I need for the partial sum to be within $\epsilon$ of the function at a point $x=\delta$?

In general, is there a big-O expression for how many terms would be needed to overcome the Gibbs phenomenon at a discontinuity?

I'm aware that the convergence is not uniform. I'm curious about the pointwise convergence at a point $\delta$ away from a discontinuity, and particularly how the number of terms $n$ needed to get within $\epsilon$ varies with $\delta$.
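For a concrete feel, here is a small numerical probe (a sketch in Python; the function name `terms_needed` and the window-based stopping rule are my own ad hoc choices, not anything canonical). It finds the first $n$ from which the partial sums of the square wave $\operatorname{sign} x$ on $(-1,1)$, extended $2$-periodically, stay within $\epsilon$ of $1$ at $x=\delta$:

```python
import numpy as np

def terms_needed(delta, eps, n_max=2000, window=50):
    """First n such that |s_m(delta) - 1| <= eps for all m in [n, n + window).
    The square wave sign(x) on (-1, 1), period 2, has Fourier series
    (4/pi) * sum over odd k of sin(k pi x)/k."""
    k = np.arange(1, n_max + 1)
    # k-th series term at x = delta (the even harmonics vanish)
    terms = np.where(k % 2 == 1, np.sin(k * np.pi * delta) / k, 0.0)
    errs = np.abs((4 / np.pi) * np.cumsum(terms) - 1.0)  # errs[i] = |s_{i+1} - 1|
    ok = errs <= eps
    for n in range(n_max - window):
        if ok[n:n + window].all():
            return n + 1  # partial sums are 1-indexed
    return None  # not reached within n_max terms

print(terms_needed(0.2, 0.01), terms_needed(0.05, 0.01))
```

The window check guards against counting an $n$ where the oscillating error merely dips below $\epsilon$ momentarily; shrinking $\delta$ at fixed $\epsilon$ visibly increases the required $n$.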


BEST ANSWER

A typical way to estimate the error is to write the partial sum as the convolution of the function with the Dirichlet kernel, and to estimate that integral. Here we consider the $2$-periodic function $f$ which agrees with $\operatorname{sign}x$ on $(-1,1)$. The Dirichlet kernel is $D_n(t)=\sin((n+1/2)\pi t)/\sin(\pi t/2)$, and the partial sum $s_n$ is related to it by
$$s_n(x) = \frac12 \int_{-1}^1 f(x-t)\,D_n(t)\,dt$$
Subtract $f(x)$, using the fact that $\frac12 \int_{-1}^1 D_n(t)\,dt=1$:
$$s_n(x)-f(x) = \frac12 \int_{-1}^1 \big(f(x-t)-f(x)\big)\, D_n(t)\,dt$$
Here $x$ is your $\delta$, a fixed number in $(0,1)$. As a function of $t$, the difference $f(x-t)-f(x)$ equals $-2$ on $(-1,x-1)\cup (x,1)$ and $0$ elsewhere (the endpoints don't matter). So
$$s_n(x)-f(x) = -\int_{-1}^{x-1} D_n(t)\,dt - \int_{x}^{1} D_n(t)\,dt$$
Since $D_n$ is an even function, both integrals are estimated in the same way. Integrate by parts:
$$\int_{x}^{1} D_n(t)\,dt = \left[-\frac{\cos((n+1/2)\pi t)}{\pi(n+1/2)}\frac{1}{\sin(\pi t/2)}\right]\bigg|_{t=x}^{t=1} - \frac{1}{2n+1}\int_x^1 \frac{\cos((n+1/2)\pi t)\cos(\pi t/2)}{\sin^2(\pi t/2)}\,dt$$
Estimate the cosines from above by $1$ (the boundary term at $t=1$ vanishes, since $\cos((n+1/2)\pi)=0$) and the sine from below by $\sin(\pi t/2)\ge t$ for $t\in[0,1]$:
$$\left|\int_{x}^{1} D_n(t)\,dt\right| \le \frac{1}{\pi(n+1/2)} \frac{1}{x} + \frac{1}{2n+1}\int_x^1 \frac{dt}{t^2} = \frac{1}{\pi(n+1/2)}\frac{1}{x} + \frac{1}{2n+1}\left(\frac{1}{x}-1\right)$$
The other integral obeys the same bound with $1-x$ in place of $x$. Adding the two bounds, and using $\frac1x+\frac1{1-x}=\frac{1}{x(1-x)}$ together with $\frac{1}{2n+1}=\frac{\pi}{2}\cdot\frac{1}{\pi(n+1/2)}$, gives
$$|s_n(x)-f(x)|\le \frac{1}{\pi(n+1/2)}\left[ \frac{1}{x(1-x)} + \frac{\pi}{2}\left(\frac{1}{x(1-x)}-2\right)\right]$$
So, you are guaranteed to be within $\epsilon$ at $x=\delta$ whenever
$$ n\ge \frac{1}{\pi\epsilon}\left[ \frac{1}{\delta(1-\delta)} + \frac{\pi}{2}\left(\frac{1}{\delta(1-\delta)}-2\right)\right]-\frac12 $$
In particular, since $\frac{1}{\delta(1-\delta)}\sim\frac{1}{\delta}$ as $\delta\to 0$, the number of terms needed is $O\!\left(\frac{1}{\epsilon\,\delta}\right)$.
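As a sanity check, here is a short numerical verification of this bound (a sketch; `partial_sum` and `error_bound` are names I made up for this illustration):

```python
import numpy as np

def partial_sum(x, n):
    # s_n at x for the 2-periodic square wave sign(x) on (-1, 1):
    # only the odd sine harmonics survive, with coefficients 4/(pi k)
    k = np.arange(1, n + 1, 2)
    return (4 / np.pi) * np.sum(np.sin(k * np.pi * x) / k)

def error_bound(delta, n):
    # the bound derived above, valid for delta in (0, 1)
    c = 1.0 / (delta * (1.0 - delta))
    return (c + (np.pi / 2) * (c - 2.0)) / (np.pi * (n + 0.5))

for delta in (0.05, 0.1, 0.25):
    for n in (10, 50, 200, 1000):
        err = abs(partial_sum(delta, n) - 1.0)
        print(f"delta={delta:5.2f}  n={n:5d}  error={err:.2e}  bound={error_bound(delta, n):.2e}")
```

The actual error sits comfortably under the bound at every $(\delta, n)$ pair, as the derivation guarantees.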

ANSWER

Gibbs' phenomenon is the fact that there is NO number of terms large enough to guarantee that the sup-norm distance from the partial sum to the square wave is small: for a square wave that goes from $-1$ to $1$, every partial sum overshoots near the jump to a peak value of about $1.18$ (the limiting peak is $\frac{2}{\pi}\int_0^{\pi}\frac{\sin t}{t}\,dt \approx 1.179$). The point is that the location of this "overshoot" moves closer and closer to the discontinuity, so that you get pointwise convergence, but not $L^\infty$ convergence.
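The overshoot is easy to see numerically (a sketch; the search grid and window near the jump are ad hoc choices). The maximum of the partial sum just to the right of the jump at $x=0$ stays pinned near $\frac{2}{\pi}\int_0^{\pi}\frac{\sin t}{t}\,dt\approx 1.179$ no matter how large $n$ gets:

```python
import numpy as np

def gibbs_peak(n, pts=4000):
    # max of s_n just to the right of the jump at x = 0;
    # the first (largest) overshoot sits at a distance ~ 1/n from the jump
    x = np.linspace(1e-4, 10.0 / n, pts)
    k = np.arange(1, n + 1, 2)  # odd harmonics of the square wave series
    s = (4 / np.pi) * (np.sin(np.outer(x, k) * np.pi) / k).sum(axis=1)
    return s.max()

print(gibbs_peak(201), gibbs_peak(801))  # both near 1.18, not 1.0
```

Increasing $n$ narrows the spike and slides it toward the jump, but its height does not shrink.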

Dym and McKean's book on Fourier Series and Integrals explains this in some detail, but leaves many things to the reader -- it's not for the faint of heart.

I seem to recall the story of Albert Michelson using a "harmonic analyzer" (a mechanical Fourier-series machine) and complaining that the machine didn't work because its output didn't really converge to a square wave, with Gibbs resolving the matter by explaining that pointwise and $L^\infty$ convergence are distinct notions.

Post-comment addition: I suspect that messing with the convolution-multiplication theorem might get you a reasonable estimate of the rate of convergence.

From some rough scratchwork based on that idea, my suspicion is that the $n$th partial sum at a fixed point differs from the true value by some error $E(n)$, and that $|E(n)|$ ends up bounded by a constant times $\frac{1}{n}$, with the constant depending on the distance from that point to the discontinuity.
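That suspicion matches the bound in the accepted answer. A quick check (a sketch; the cutoff $3.4$ comes from that answer's constant at $x=1/4$, namely $\frac1\pi\big[\frac{1}{x(1-x)}+\frac{\pi}{2}\big(\frac{1}{x(1-x)}-2\big)\big]\approx 3.37$, so $n\,|E(n)|$ must stay below it):

```python
import numpy as np

def sq_error(x, n_max):
    # |s_n(x) - 1| for n = 1..n_max, via a cumulative sum of the series terms
    k = np.arange(1, n_max + 1)
    terms = np.where(k % 2 == 1, np.sin(k * np.pi * x) / k, 0.0)
    return np.abs((4 / np.pi) * np.cumsum(terms) - 1.0)

n = np.arange(1, 2001)
scaled = n * sq_error(0.25, 2000)  # n * E(n): bounded, but not tending to 0
print(scaled.max())
```

The scaled error oscillates but stays bounded, and it does not die off, so the $\frac{1}{n}$ rate at a fixed point appears to be sharp.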