Suppose we want to integrate $$I(f) := \int_{-1}^1{f(x)\over\sqrt{1-x^2}}\,dx.$$ I have the quadrature formula $$Q_2(f) = {\pi\over 2}\sum_{i=1}^2 f(x_i), \qquad x_i = \pm{\sqrt2\over2},$$ and I want to put an upper bound on the error $$|E_2(f)| = |I(f) - Q_2(f)|.$$ I have shown that if $f$ is a cubic polynomial, then the error is zero. However, when $f$ is an arbitrary (differentiable) function, I'm not entirely sure what can be done. I know that we can do the following: $$\begin{align*}I(f) &= \int_{-1}^1{f(x)\over\sqrt{1-x^2}}\,dx\\ &= \int_0^\pi f(\cos\theta)\,d\theta \tag{Let $x=\cos\theta$}\\ &\approx \sum_{i=1}^n\alpha_if(x_i),\end{align*}$$
so that $$\begin{align*}|E_2(f)| &= |I(f) - Q_2(f)|\\ &= \left|\int_0^\pi f(\cos\theta)\,d\theta - \sum_{i=1}^2{\pi\over2}f(x_i)\right|,\end{align*}$$ but I'm not sure where to go from here to start talking about an error bound. I thought we could use, say, the Lagrange interpolating polynomial for $f$, but the mismatch in the indices makes me think this isn't the right approach.
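For concreteness, here is a small numerical sketch of what I have so far (assuming Python with NumPy/SciPy; the helper names `Q2` and `I` are just mine), checking that the rule reproduces the weighted integral exactly for a cubic but not for $x^4$:

```python
# Numerical check that Q_2(f) = (pi/2)*(f(sqrt(2)/2) + f(-sqrt(2)/2))
# reproduces I(f) = \int_{-1}^{1} f(x)/sqrt(1-x^2) dx for cubic polynomials.
import numpy as np
from scipy.integrate import quad

NODES = np.array([np.sqrt(2) / 2, -np.sqrt(2) / 2])

def Q2(f):
    """Two-point rule with equal weights pi/2."""
    return (np.pi / 2) * np.sum(f(NODES))

def I(f):
    """Reference value of the weighted integral, computed via the
    substitution x = cos(theta) to avoid the endpoint singularity."""
    val, _ = quad(lambda t: f(np.cos(t)), 0, np.pi)
    return val

cubic = lambda x: 2 * x**3 - x**2 + 3 * x + 1
print(I(cubic), Q2(cubic))      # agree (both pi/2): exact for cubics
quartic = lambda x: x**4
print(I(quartic), Q2(quartic))  # differ: 3*pi/8 vs pi/4
```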
I don't think that it is possible to bound the error purely from the differentiability of $f$. You need a stronger condition.
I can choose a function such that $f\left(\pm \frac{1}{\sqrt{2}}\right) = 0$, so that $Q_2(f) = 0$, e.g. $$ f(x) = a\cos\left(\frac{\pi}{\sqrt{2}} x\right). $$
Now, since the integral is non-zero and proportional to $a$, I can make the error as big as I like by increasing $a$.
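To see this numerically, here is a minimal sketch (assuming Python with NumPy/SciPy; the helper names are ad hoc): the rule returns zero on this $f$ for every $a$, while the weighted integral, and hence the error, scales linearly with $a$.

```python
# Counterexample: f(x) = a*cos(pi*x/sqrt(2)) vanishes at both nodes,
# so Q_2(f) = 0, while I(f) is a nonzero constant times a.
import numpy as np
from scipy.integrate import quad

NODES = np.array([np.sqrt(2) / 2, -np.sqrt(2) / 2])

def Q2(f):
    return (np.pi / 2) * np.sum(f(NODES))

def I(f):
    val, _ = quad(lambda t: f(np.cos(t)), 0, np.pi)
    return val

for a in (1.0, 10.0, 100.0):
    f = lambda x, a=a: a * np.cos(np.pi / np.sqrt(2) * x)
    print(a, Q2(f), abs(I(f) - Q2(f)))  # Q2 is ~0, the error grows like a
```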
What you could do is use a Taylor expansion of $f$ to compute the error, take the leading term as an estimate of that error, and then use it to bound the result in terms of, say, $\max_x|f''(x)|$ or something like that.
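As a rough illustration of that idea (a sketch assuming Python with NumPy/SciPy, not a definitive bound): since you showed the rule is exact for cubics, the first Taylor term about $0$ that contributes to the error is the $x^4$ one, so the leading-term estimate is $\frac{f^{(4)}(0)}{4!}\bigl(I(x^4) - Q_2(x^4)\bigr)$, and replacing $f^{(4)}(0)$ by $\max_x\left|f^{(4)}(x)\right|$ gives a crude bound. The sketch checks this against the true error for $f = e^x$, whose fourth derivative is again $e^x$.

```python
# Leading-term Taylor estimate of the quadrature error:
#   E_2(f) ~ (f''''(0)/4!) * (I(x^4) - Q_2(x^4)),
# since the terms up to x^3 are integrated exactly by Q_2.
import numpy as np
from scipy.integrate import quad

NODES = np.array([np.sqrt(2) / 2, -np.sqrt(2) / 2])

def Q2(f):
    return (np.pi / 2) * np.sum(f(NODES))

def I(f):
    val, _ = quad(lambda t: f(np.cos(t)), 0, np.pi)
    return val

# Error of the rule on the monomial x^4 (equals 3*pi/8 - pi/4 = pi/8).
e4 = I(lambda x: x**4) - Q2(lambda x: x**4)

f = np.exp
actual_error = abs(I(f) - Q2(f))
leading_term = np.exp(0.0) / 24 * e4   # estimate using f''''(0) = 1
crude_bound  = np.exp(1.0) / 24 * e4   # bound using max|f''''| = e on [-1, 1]
print(actual_error, leading_term, crude_bound)  # roughly 0.017, 0.016, 0.044
```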