Consider the function $f(x)=\sin(x)$ evaluated at $x=.4$. If you use a degree-one (i.e., $n=1$) Maclaurin polynomial, $P(x)$, to approximate the function, you get an actual error equal to $f(.4)-P(.4)$, or $\sin(.4)-.4$, which means that the absolute value of the error is roughly $.3930$.
However, when you calculate the maximum absolute value of the $(n+1)$th derivative of $f(x)$ over $[0,.4]$, which is $\sin(.4)$, you cannot use this value as $M$ in the Lagrange error bound. If you try to, you get $$\frac{M|x-0|^{n+1}}{(n+1)!}=\frac{\sin(.4)\cdot(.4)^2}{2!},$$ which is approximately $.0005585$, a value less than the actual error.
How is it possible to get a Lagrange error bound smaller than the absolute value of the actual error? What am I doing wrong?
Calculus with trigonometric functions is always done in radians. That is the entire point of using radians: we might as well use degrees for everything else, but as soon as a derivative or an integral is involved, it's time to switch to radians.
The derivative formula $\frac{d}{dx}\sin x=\cos x$? That relies on radians, as do all the other derivative formulas. If we tried to differentiate $\sin_\circ$, the sine of an angle expressed in degrees, the chain rule would hand us a constant multiple: since $\sin_\circ(x)=\sin\left(\frac{\pi}{180}x\right)$, we get $$\frac{d}{dx}\sin_\circ(x)=\frac{\pi}{180}\cos\left(\frac{\pi}{180}x\right)=\frac{\pi}{180}\cos_\circ(x).$$ Repeat that, and we multiply by another factor of $\frac{\pi}{180}$ with each derivative. That's going to seriously throw off a power series.
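That constant multiple is easy to confirm numerically. A minimal sketch in Python (the helper name `sin_deg` is mine, not standard):

```python
import math

def sin_deg(x):
    """Sine of an angle given in degrees."""
    return math.sin(math.pi / 180 * x)

x, h = 30.0, 1e-6

# Central-difference estimate of the derivative of sin_deg at x
numeric = (sin_deg(x + h) - sin_deg(x - h)) / (2 * h)

# The closed form from the chain rule: (pi/180) * cos_deg(x)
closed_form = (math.pi / 180) * math.cos(math.pi / 180 * x)

print(numeric, closed_form)  # both are approximately 0.0151, not cos(30°) ≈ 0.866
```

The derivative comes out roughly $57$ times smaller than $\cos_\circ(30)$, which is exactly the $\frac{180}{\pi}$ discrepancy described above.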
So, instead, we do all of our calculus in radians, the measure chosen precisely so that the constant multiple in those derivatives is just $1$. With your calculator in radian mode, the actual error is $|\sin(.4)-.4|\approx .0106$, comfortably below the Lagrange bound $\frac{\sin(.4)\cdot(.4)^2}{2!}\approx .0312$; your figures of $.3930$ and $.0005585$ come from evaluating $\sin(.4)$ in degree mode.
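For completeness, the arithmetic in the question can be reproduced with a quick Python check (this assumes, as the numbers suggest, that the $.3930$ figure came from a calculator set to degree mode):

```python
import math

x = 0.4

# Radian mode (correct): P(x) = x is the degree-one Maclaurin polynomial of sin
actual_error = abs(math.sin(x) - x)        # ≈ 0.0106
M = math.sin(x)                            # max of |f''| = sin on [0, 0.4]
bound = M * x ** 2 / math.factorial(2)     # ≈ 0.0312
print(actual_error <= bound)               # True: the Lagrange bound holds

# Degree mode (the source of the paradox): sin(0.4°) instead of sin(0.4)
sin_deg = math.sin(math.radians(x))        # ≈ 0.0070
bad_error = abs(sin_deg - x)               # ≈ 0.3930, the question's "actual error"
bad_bound = sin_deg * x ** 2 / 2           # ≈ 0.0005585, the question's "bound"
print(bad_error <= bad_bound)              # False: mixing modes breaks the bound
```

In radian mode the bound exceeds the error, exactly as Taylor's theorem promises; the apparent contradiction exists only when the sines are taken in degrees.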