If I use the trapezoidal rule to work out the area under a normal distribution curve from $x=0$ to $x=b$ where $b>0$, I would surely expect the absolute error to begin to decrease past the inflection point (i.e. $x = +\sigma$), since the concavity changes there and the trapezia go from underestimating on the concave-down part of the curve (from $x=0$ to $x=\sigma$) to overestimating on the concave-up part ($x > \sigma$).
I cannot understand why the absolute value of the percentage error (and the absolute error) only begins to decrease a little after this point. I have attached a graph from a simulation I ran in C++; I also checked it using a tedious method in Excel, so I am certain there is no error in my code.
Thank you to anyone who can shed light on this bizarre result!
Edit: For this graph $\sigma = 1$ and the number of trapezia used is fixed.
The trapezoidal approximation will underestimate an integral taken over a region where the function is concave, and overestimate one taken over a region where it is convex. This is because where the function is concave, the top edge (the chord) of each trapezoid lies below the arc of the curve, so the trapezoid's area is less than the area under that arc. The reverse happens in the convex region (take a look at this gif of the trapezoidal approximation in action for an integral of a curve in its region of convexity).
So if you are integrating the normal curve from $0$ to $b < \sigma$, a trapezoidal approximation returns a value that is smaller than the actual value of the integral, accumulating more and more negative error the closer $b$ gets to $\sigma$. Once the upper limit of integration passes $\sigma$, the new trapezoids start overestimating and the error begins to move back toward zero, as shown in your graph. Note also that the absolute error need not peak exactly at $\sigma$: just past the inflection point the curve is nearly straight, so the first overestimating trapezoids contribute only tiny positive errors (and, since you keep the number of trapezia fixed, the strip width $h = b/n$ keeps growing with $b$, inflating every contribution). The accumulated negative error therefore keeps growing for a while before the cancellation takes over, which is why the turning point in your graph sits a little after $\sigma$ rather than exactly at it.