A simple question comparing two numerical integration methods in a very specific case. We expect Simpson's rule to have a smaller error than the trapezoidal rule, but suppose we want to calculate
$ \int_0^{2\pi}\sin^2x dx $
with $n=5$ equidistant points. For the trapezoidal and Simpson's rules we have (not efficient code, didactic purposes only):
% MATLAB code
x = linspace(0,2*pi,5); % domain discretization
y = sin(x).^2; % function values
h = x(2)-x(1); % step
w_trapz = [1 2 2 2 1]; % weights for composite trapezoidal rule
w_simps = [1 4 2 4 1]; % weights for composite Simpson's rule
I_trapz = sum(y.*w_trapz)*h/2; % trapezoidal approximation
I_simps = sum(y.*w_simps)*h/3; % Simpson's approximation
The exact answer for this integral is $\pi$ and we check that:
I_trapz =
3.1416
I_simps =
4.1888
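For readers without MATLAB, the computation is easy to check with a direct Python/NumPy translation of the snippet above (a verification sketch, not part of the original code):

```python
import numpy as np

x = np.linspace(0, 2*np.pi, 5)      # 5 equidistant points on [0, 2*pi]
y = np.sin(x)**2                    # function values: [0, 1, 0, 1, 0]
h = x[1] - x[0]                     # step, pi/2

I_trapz = h/2 * np.dot(y, [1, 2, 2, 2, 1])  # composite trapezoidal rule
I_simps = h/3 * np.dot(y, [1, 4, 2, 4, 1])  # composite Simpson's rule

print(I_trapz)  # 3.1415... = pi (exact up to rounding)
print(I_simps)  # 4.1887... = 4*pi/3
```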
So, for this particular case, the trapezoidal rule was better. What is the reason for that?
Note that the error term in the Composite Simpson's rule is
$ \varepsilon=-\frac{b-a}{180}h^4f^{(4)}(\mu) $
for some $\mu\in(a,b)$
while the error term for the Composite Trapezoidal rule is
$ \varepsilon=-\frac{b-a}{12}h^2f^{(2)}(\mu) $
Evaluating the second and fourth derivatives of $f(x)=\sin^2(x)$, namely $f''(x)=2\cos 2x$ and $f^{(4)}(x)=-8\cos 2x$, and noting that $b-a=2\pi$ and $h=\pi/2$, the error terms for the two techniques are:
$ \varepsilon_{Simpson}=-\frac{2\pi}{180}\left(\frac{\pi}{2}\right)^4\left(-8\cos2\mu\right)\\ \varepsilon_{Trapz}=-\frac{2\pi}{12}\left(\frac{\pi}{2}\right)^2\left(2\cos2\mu\right) $
We estimate the maximum error in each approximation by finding the maximum absolute value the error term can obtain. Since in both approximations we have $\cos(2\mu)$ and $\mu\in(0,2\pi)$, then $\max{|\cos(2\mu)|}=1$, and we have
$ \max{\left|\varepsilon_{Simpson}\right|}=\frac{2\pi}{180}\left(\frac{\pi}{2}\right)^4\left(8\right)=\frac{\pi^5}{180}\approx1.70\\ \max{\left|\varepsilon_{Trapz}\right|}=\frac{2\pi}{12}\left(\frac{\pi}{2}\right)^2\left(2\right)=\frac{\pi^3}{12}\approx2.58 $
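Both worst-case bounds do hold for the computed values, so neither is contradicted by the experiment; they are just not tight here. A quick numerical check (Python sketch):

```python
import numpy as np

x = np.linspace(0, 2*np.pi, 5)
y = np.sin(x)**2
h = x[1] - x[0]
I_trapz = h/2 * np.dot(y, [1, 2, 2, 2, 1])
I_simps = h/3 * np.dot(y, [1, 4, 2, 4, 1])

err_trapz = abs(I_trapz - np.pi)   # actual error: ~0 (exact)
err_simps = abs(I_simps - np.pi)   # actual error: ~1.047

bound_trapz = np.pi**3 / 12        # ~2.58, worst case over mu
bound_simps = np.pi**5 / 180       # ~1.70, worst case over mu

print(err_trapz <= bound_trapz, err_simps <= bound_simps)  # True True
```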
So the worst-case error bound is smaller for Simpson's rule than for the trapezoidal rule. Yet in this case the trapezoidal rule gave the exact result of the integral, while Simpson's rule was off by $\approx1.047$ (about 33% wrong).
Why is that? Does it have to do with the number of points in the discretization, with the function being integrated or is it just a coincidence for this particular case?
We observe that, as the number of points increases, both methods perform nearly equally. Can we say that for a small number of points the trapezoidal rule will perform better than Simpson's rule?
Old question, but since the right answer hasn't yet been posted...
The real reason for the trapezoidal rule having smaller error than Simpson's rule is that it performs spectacularly when integrating smooth periodic functions over a full period. There are two ways to explain this phenomenon.

First we can start with $$\begin{align}\int_0^1f(x)dx&=\left.\left(x-\frac12\right)f(x)\right|_0^1-\int_0^1\left(x-\frac12\right)f^{\prime}(x)dx\\ &=\left.B_1(x)f(x)\right|_0^1-\int_0^1B_1(x)f^{\prime}(x)dx\\ &=\frac12\left(f(0)+f(1)\right)-\left.\frac12B_2(x)f^{\prime}(x)\right|_0^1+\frac12\int_0^1B_2(x)f^{\prime\prime}(x)dx\\ &=\frac12\left(f(0)+f(1)\right)-\frac12B_2\left(f^{\prime}(1)-f^{\prime}(0)\right)+\frac12\int_0^1B_2(x)f^{\prime\prime}(x)dx\\ &=\frac12\left(f(0)+f(1)\right)-\sum_{n=2}^{2N}\frac{B_n}{n!}\left(f^{(n-1)}(1)-f^{(n-1)}(0)\right)+\int_0^1\frac{B_{2N}(x)}{(2N)!}f^{(2N)}(x)dx\end{align}$$ where $B_n(x)$ is the $n^{\text{th}}$ Bernoulli polynomial and $B_n=B_n(1)$ is the $n^{\text{th}}$ Bernoulli number. Since $B_{2n+1}=0$ for $n>0$, we also have $$\int_0^1f(x)dx=\frac12\left(f(0)+f(1)\right)-\sum_{n=1}^{N}\frac{B_{2n}}{(2n)!}\left(f^{(2n-1)}(1)-f^{(2n-1)}(0)\right)+\int_0^1\frac{B_{2N}(x)}{(2N)!}f^{(2N)}(x)dx$$ That leads to the trapezoidal rule with correction terms (the Euler-Maclaurin formula), with $h=(b-a)/m$: $$\begin{align}\int_a^bf(x)dx&=\sum_{k=1}^m\int_{a+(k-1)h}^{a+kh}f(x)dx\\ &=\frac h2\left(f(a)+f(b)\right)+h\sum_{k=1}^{m-1}f(a+kh)-\sum_{n=1}^N\frac{h^{2n}B_{2n}}{(2n)!}\left(f^{(2n-1)}(b)-f^{(2n-1)}(a)\right)\\ &\quad+\int_a^b\frac{h^{2N}B_{2N}\left(\left\{\frac{x-a}h\right\}\right)}{(2N)!}f^{(2N)}(x)dx\end{align}$$ Since we are assuming $f(x)$ has period $b-a$ and all of its derivatives are continuous, the correction terms all add up to zero and we are left with $$\int_a^bf(x)dx=\frac h2\left(f(a)+f(b)\right)+h\sum_{k=1}^{m-1}f(a+kh)+\int_a^b\frac{h^{2N}B_{2N}\left(\left\{\frac{x-a}h\right\}\right)}{(2N)!}f^{(2N)}(x)dx$$ So the error is $O(h^{2N})$ for every $N$; the only limitation is that eventually the product of the Bernoulli polynomial and the $2N^{\text{th}}$ derivative starts to grow faster than $h^{-2N}$.
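This faster-than-any-power convergence for smooth periodic integrands is easy to observe numerically. The sketch below (Python, my own check rather than part of the original post) uses the smooth $2\pi$-periodic integrand $1/(1+\tfrac12\cos x)$ that appears in the tables further down:

```python
import numpy as np

def trapz_periodic(f, a, b, m):
    """Composite trapezoidal rule with m subintervals on [a, b]."""
    x = np.linspace(a, b, m + 1)
    w = np.full(m + 1, 1.0)
    w[0] = w[-1] = 0.5
    return (b - a) / m * np.dot(w, f(x))

f = lambda x: 1.0 / (1.0 + 0.5*np.cos(x))   # smooth, 2*pi-periodic
exact = 2*np.pi / np.sqrt(1.0 - 0.25)       # 2*pi/sqrt(1-e^2) with e = 1/2

for m in (4, 8, 16, 32):
    err = abs(trapz_periodic(f, 0.0, 2*np.pi, m) - exact)
    print(m, err)   # error shrinks geometrically as m grows
```

By $m=32$ subintervals (33 points) the error is already at the level of machine rounding.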
The other way to look at it is to consider that $f(x)$, being periodic and smooth, can be represented by a Fourier series $$f(x)=\frac{a_0}2+\sum_{n=1}^{\infty}\left(a_n\cos\frac{2\pi n(x-a)}{b-a}+b_n\sin\frac{2\pi n(x-a)}{b-a}\right)$$ Since it's periodic, $f(a)=f(b)$ and the trapezoidal rule computes $$\int_a^bf(x)dx\approx h\sum_{k=0}^{m-1}f(a+kh)$$ Since $\sin\alpha\left(k+\frac12\right)-\sin\alpha\left(k-\frac12\right)=2\cos\alpha k\sin\frac{\alpha}2$, if $m$ is not a divisor of $n$, $$\begin{align}\sum_{k=0}^{m-1}\cos\frac{2\pi nkh}{b-a}&=\sum_{k=0}^{m-1}\cos\frac{2\pi nk}m=\sum_{k=0}^{m-1}\frac{\sin\frac{2\pi n}m(k+1/2)-\sin\frac{2\pi n}m(k-1/2)}{2\sin\frac{\pi n}m}\\ &=\frac{\sin\frac{2\pi n}m(m-1/2)-\sin\frac{2\pi n}m(-1/2)}{2\sin\frac{\pi n}m}=0\end{align}$$ If $m$ is a divisor of $n$, then $$\sum_{k=0}^{m-1}\cos\frac{2\pi nkh}{b-a}=\sum_{k=0}^{m-1}\cos\frac{2\pi nk}m=m$$ Since $\cos\alpha\left(k+\frac12\right)-\cos\alpha\left(k-\frac12\right)=-2\sin\alpha k\sin\frac{\alpha}2$, if $m$ is not a divisor of $n$, $$\begin{align}\sum_{k=0}^{m-1}\sin\frac{2\pi nkh}{b-a}&=\sum_{k=0}^{m-1}\sin\frac{2\pi nk}m=-\sum_{k=0}^{m-1}\frac{\cos\frac{2\pi n}m(k+1/2)-\cos\frac{2\pi n}m(k-1/2)}{2\sin\frac{\pi n}m}\\ &=-\frac{\cos\frac{2\pi n}m(m-1/2)-\cos\frac{2\pi n}m(-1/2)}{2\sin\frac{\pi n}m}=0\end{align}$$ And even if $m$ is a divisor of $n$, $$\sum_{k=0}^{m-1}\sin\frac{2\pi nkh}{b-a}=\sum_{k=0}^{m-1}\sin\frac{2\pi nk}m=0$$ So the trapezoidal rule produces $$\int_a^bf(x)dx\approx(b-a)\left(\frac{a_0}2+\sum_{n=1}^{\infty}a_{mn}\right)$$ Since the exact answer is $\int_a^bf(x)dx=(b-a)a_0/2$, we see that the trapezoidal rule with $m$ points integrates the first $m-1$ Fourier modes exactly, erring only through the aliased coefficients $a_{mn}$, which decay rapidly for a smooth integrand. This is much like Gaussian quadrature, which is exact on the lowest-degree polynomials (up to degree $2n-1$ with $n$ points), so it's amusing to compare the two for periodic and nonperiodic functions.
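The aliasing cancellation above is easy to verify directly: with $m$ sample points, the trapezoidal sum of the mode $\cos nx$ vanishes unless $m$ divides $n$. A short Python sketch (my own check, not the original program behind the tables below):

```python
import numpy as np

a, b = 0.0, 2*np.pi
m = 8                                # number of sample points
h = (b - a) / m
xk = a + h*np.arange(m)              # for periodic f the trapezoidal sum
                                     # equals this left-endpoint sum

for n in (3, 5, 8, 16):
    S = h*np.sum(np.cos(n*xk))       # trapezoidal sum of the mode cos(n x)
    # the exact integral over a full period is 0; the sum is 0 unless
    # m divides n, when the mode aliases to a constant and the sum is b - a
    print(n, S)
```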
For the periodic function $$\int_0^{2\pi}\frac{dx}{1+e\cos x}=\frac{2\pi}{\sqrt{1-e^2}}=7.2551974569368713$$ For $e=1/2$ we get $$\begin{array}{c|cccc}N&\text{Trapezoidal}&\text{Simpson}&\text{Romberg}&\text{Gauss}\\ \hline 3&8.3775804095727811&9.7738438111682449&9.7738438111682449&8.1148990311586466\\ 5&7.3303828583761836&6.9813170079773172&6.7951485544312549&7.4176821579266701\\ 9&7.2555830332907121&7.2306497582622216&7.2544485033158699&7.2613981883302499\\ 17&7.2551974671820254&7.2550689451457968&7.2568558971905723&7.2552065886284041\\ 33&7.2551974569368731&7.2551974535218227&7.2551741878182652&7.2551974569565632 \end{array}$$ As can be seen, the trapezoidal rule is even outperforming Gaussian quadrature, producing an almost exact result with $33$ data points. Simpson's rule is not as good because it averages in a trapezoidal rule approximation that uses fewer data points. Romberg's rule, usually pretty reliable, is even worse than Simpson, and for the same reason.

How about $$\int_0^1\frac{dx}{1+x^2}=\frac{\pi}4=0.78539816339744828$$ $$\begin{array}{c|cccc}N&\text{Trapezoidal}&\text{Simpson}&\text{Romberg}&\text{Gauss}\\ \hline 3&0.77500000000000002&0.78333333333333333&0.78333333333333333&0.78526703499079187\\ 5&0.78279411764705875&0.78539215686274499&0.78552941176470581&0.78539815997118823\\ 9&0.78474712362277221&0.78539812561467670&0.78539644594046842&0.78539816339706148\\ 17&0.78523540301034722&0.78539816280620556&0.78539816631942927&0.78539816339744861\\ 33&0.78535747329374361&0.78539816338820911&0.78539816340956103&0.78539816339744795 \end{array}$$ This is a pretty hateful integral because its derivatives grow pretty fast in the interval of integration. Even here Romberg isn't really any better than Simpson, and now the trapezoidal rule is lagging far behind, but Gaussian quadrature is still doing well.
Finally an easy one: $$\int_0^1e^xdx=e-1=1.7182818284590451$$ $$\begin{array}{c|cccc}N&\text{Trapezoidal}&\text{Simpson}&\text{Romberg}&\text{Gauss}\\ \hline 3&1.7539310924648253&1.7188611518765928&1.7188611518765928&1.7182810043725216\\ 5&1.7272219045575166&1.7183188419217472&1.7182826879247577&1.7182818284583916\\ 9&1.7205185921643018&1.7182841546998968&1.7182818287945305&1.7182818284590466\\ 17&1.7188411285799945&1.7182819740518920&1.7182818284590782&1.7182818284590460\\ 33&1.7184216603163276&1.7182818375617721&1.7182818284590460&1.7182818284590444 \end{array}$$ This is the order we expect: Gauss is pretty much exact at $9$ data points, Romberg at $33$, with Simpson's rule and the trapezoidal rule bringing up the rear because they aren't being served the grapefruit of a periodic integrand.
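The program behind these tables isn't shown, but a minimal Python sketch along these lines reproduces the $N=9$ row of the last table (my reconstruction, with Romberg omitted for brevity):

```python
import numpy as np

def trapz_rule(f, a, b, N):
    """Composite trapezoidal rule on N equally spaced points."""
    x = np.linspace(a, b, N)
    h = x[1] - x[0]
    w = np.full(N, 1.0)
    w[0] = w[-1] = 0.5
    return h*np.dot(w, f(x))

def simpson_rule(f, a, b, N):
    """Composite Simpson's rule on N equally spaced points (N odd)."""
    x = np.linspace(a, b, N)
    h = x[1] - x[0]
    w = np.full(N, 2.0)
    w[1::2] = 4.0
    w[0] = w[-1] = 1.0
    return h/3*np.dot(w, f(x))

def gauss_rule(f, a, b, N):
    """N-point Gauss-Legendre quadrature mapped from [-1, 1] to [a, b]."""
    t, w = np.polynomial.legendre.leggauss(N)
    return (b - a)/2*np.dot(w, f((b - a)/2*t + (a + b)/2))

exact = np.e - 1
for rule in (trapz_rule, simpson_rule, gauss_rule):
    print(rule.__name__, rule(np.exp, 0.0, 1.0, 9))
```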
Hope the longish post isn't considered off-topic. Is the plague over yet?