I was tasked with the following:
Letting $h=(b-a)/(m+1)$ and $x_j=a+jh$ where $j=0,1,\dots,m,m+1$, the composite trapezoid rule says \begin{equation} \int^{b}_a g(x)\,dx\approx h\sum^{m}_{j=0}\frac{g(x_j)+g(x_{j+1})}{2}\,. \end{equation} Assuming $g$ is smooth enough, show that the error in the approximation is $\mathcal{O}(h^2)$.
I proceeded as follows:
Noting that $x_j=a+hj$ and $x_{j+1}=a+h(j+1)$, we can write the Taylor expansions for any subinterval as follows, expanding about $x_j$ and $x_{j+1}$ respectively: \begin{equation*} g(x)=g(a+hj)+(x-a-hj)\,g'(a+hj)+\mathcal{O}\left((x-a-hj)^2\right) \end{equation*} and \begin{equation*} g(x)=g(a+h(j+1))+(x-a-h(j+1))\,g'(a+h(j+1))+\mathcal{O}\left((x-a-h(j+1))^2\right)\,. \end{equation*} For the $x_j$ case, we have \begin{equation*} \lim_{h\to\infty}\Bigg|\frac{(x-a-hj)^2}{h^2}\Bigg|=\lim_{h\to\infty}\Bigg|\frac{a^2}{h^2}-\frac{2 a x}{h^2}+\frac{2 a j}{h}+\frac{x^2}{h^2}-\frac{2 j x}{h}+j^2\Bigg|=j^2\,, \end{equation*} and by inspection any higher power of $h$ in the denominator sends the limit to zero, while any lower power sends it to infinity. The same is true for the $x_{j+1}$ case: \begin{align*} \lim_{h\to\infty}\Bigg|\frac{(x-a-h(j+1))^2}{h^2}\Bigg|&=\lim_{h\to\infty}\Bigg|\frac{a^2}{h^2}-\frac{2 a x}{h^2}+\frac{2 a j}{h}+\frac{2 a}{h}+\frac{x^2}{h^2}-\frac{2 j x}{h}-\frac{2 x}{h}+j^2+2 j+1\Bigg|\\ &=j^2+2j+1=(j+1)^2\,. \end{align*} Because $j$ is a constant for each subinterval, the error term in either expansion is $\mathcal{O}(h^2)$, and therefore so is the error in their sum divided by 2. Each term in the summation is then multiplied by $h$, leaving $\mathcal{O}(h^3)$ per subinterval; but the sum over all subintervals has $\frac{b-a}{h}$ terms, bringing us back to $\mathcal{O}(h^2)$.
Is this a valid solution? I have not seen the limit definition for big-O used much and wonder if this is a good application for it. The TA says that taking $h$ to infinity is improper because it represents the grid spacing, but can't we just adjust $b$ and $a$ to make that happen?
Note: I understand that a proof like Ian's is generally the way this is done. I'm specifically seeking comments about my approach here.
This looks a bit long to me for a proof like this. First of all, it is enough to look at a single subinterval, so you are comparing $\int_a^b f(x)\,dx$ and $(b-a)\frac{f(a)+f(b)}{2}$. Then just sum up the errors, being careful to remember that the number of summands depends on $h$. For simplicity, let me denote the interval by $(0,h)$.
As for the error on one subinterval, recall that the trapezoidal rule exactly integrates linear functions. Thus you are really dealing with
$$\int_0^h \left[ f(x)-\left ( f(0) + \frac{f(h)-f(0)}{h}\, x \right ) \right] dx.$$
Now write $f(x)=f(0)+f'(0)x+O(h^2)$ to get
$$\int_0^h f(0)-f(0)+f'(0)x-\frac{f(h)-f(0)}{h}\, x + O(h^2)\, dx = \int_0^h \left( f'(0) - \frac{f(h)-f(0)}{h} \right) x\, dx + O(h^3).$$
Now finish by showing that $\frac{f(h)-f(0)}{h}-f'(0)=O(h)$ and using the simple fact that $\int_0^h x\,dx = h^2/2$.
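Spelled out, that last step is a one-line Taylor computation:
$$\frac{f(h)-f(0)}{h}-f'(0)=\frac{f(0)+f'(0)h+O(h^2)-f(0)}{h}-f'(0)=O(h)\,,$$
so the remaining integral is bounded by $O(h)\int_0^h x\,dx = O(h)\cdot\frac{h^2}{2}=O(h^3)$ per subinterval, and summing over the $\frac{b-a}{h}$ subintervals gives $O(h^2)$ overall.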
This technique of writing the quadrature rule as the exact integral of a "nearby" function is very widely applicable, and similar things show up in other areas of numerical analysis too.
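As a quick sanity check (not part of the proof), a few lines of Python confirm the second-order rate empirically. This is just a sketch using the question's indexing convention ($m+2$ nodes, $h=(b-a)/(m+1)$); the function names are mine:

```python
import math

def trap(g, a, b, m):
    # Composite trapezoid rule with nodes x_j = a + j*h, j = 0, ..., m+1,
    # where h = (b - a)/(m + 1), matching the question's setup.
    h = (b - a) / (m + 1)
    xs = [a + j * h for j in range(m + 2)]
    return h * sum((g(xs[j]) + g(xs[j + 1])) / 2 for j in range(m + 1))

# Test on exp over [0, 1], whose integral is e - 1.
exact = math.e - 1.0
errs = [abs(trap(math.exp, 0.0, 1.0, m) - exact) for m in (9, 19, 39, 79)]

# Each step halves h, so a second-order method should cut the error
# by roughly a factor of 4 each time.
rates = [errs[i] / errs[i + 1] for i in range(len(errs) - 1)]
print(rates)
```

The printed ratios hover near 4, consistent with the $O(h^2)$ error bound derived above.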