I recently started doing practice problems on Lagrange errors for Taylor polynomial approximation. I got the following question:
Suppose the Taylor polynomial is centered at x = 0 and we approximate over the closed interval from 0 to 1.
Can I always assume the Lagrange error is largest at whatever x-value is "farthest" from the center of expansion?
Theoretically, the approximation should get worse as the x-value moves farther from the center, right? But I feel like there is some weird example where this doesn't hold.
Thanks, and sorry for not being able to write in LaTeX yet. I'm a newbie lol
No. Here is a counterexample. Let $f(x) = \cos(2\pi x)$. Consider the extreme case of $0$-th order Taylor polynomial: $p(x) = \sum_{k = 0}^0 \frac{f^{(k)}(0)}{k!}x^k = f(0) = 1$.
Over the interval $[0, 1]$, the largest approximation error $|f(x) - p(x)|$ occurs at $x = \frac{1}{2}$, where it equals $|\cos(\pi) - 1| = 2$, and not at $x = 1$: since $f(1) = \cos(2\pi) = 1 = f(0)$, the error at the far endpoint is exactly $0$.
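If you want to see this without any calculus, here is a quick numerical check (a throwaway Python sketch, not part of any standard library) that samples the error over $[0, 1]$:

```python
import math

# f(x) = cos(2*pi*x); its 0th-order Taylor polynomial about 0 is p(x) = f(0) = 1,
# so the approximation error is |f(x) - 1|.
def error(x):
    return abs(math.cos(2 * math.pi * x) - 1.0)

# Sample the error on a fine grid over [0, 1] and find where it peaks.
xs = [i / 1000 for i in range(1001)]
worst_x = max(xs, key=error)

print(worst_x)     # 0.5 -- the error peaks at the midpoint
print(error(1.0))  # ~0.0 -- the "farthest" point has (essentially) zero error
```

So the point farthest from the center is actually where the approximation is best here, because $f$ happens to return to its value at the center.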