What I know:
- A Taylor series is a local polynomial expansion of a function
- It uses derivatives (often many of them)
- It's supposed to make estimations easier/simpler
I'm not great with differentiation, but I can tell it can get pretty hard. And while a truncated Taylor series can be very accurate, it is never 100% identical to the original function.
From what I can see, finding the derivatives of a function can be quite a lot of work, and it's still more accurate to work with the original function anyway. Take this function for example:
$f(x) = \sin(\cos(x)) \cos(\sin(x))$
whose partial Taylor expansion around $x = 0$ is
$\sin(1) + \frac{1}{2} x^2 \left(-\sin(1) - \cos(1)\right) + \frac{1}{24} x^4 \left(2\sin(1) + 7\cos(1)\right) + \frac{1}{720} x^6 \left(23\sin(1) - 76\cos(1)\right) + O(x^8)$
which takes effort to calculate (admittedly, I just used Wolfram Alpha) and, once it has many terms, looks even longer and in some cases more complicated than the original. Instead of doing all that, I could just throw some inputs at the original function and sample the output, which is quite straightforward.
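To make the comparison concrete, here is a rough numerical sketch (assuming NumPy; the polynomial coefficients are just the ones quoted above):

```python
import numpy as np

def f(x):
    # the original function
    return np.sin(np.cos(x)) * np.cos(np.sin(x))

def taylor6(x):
    # degree-6 Taylor polynomial around x = 0, using the coefficients quoted above
    s1, c1 = np.sin(1.0), np.cos(1.0)
    return (s1
            + x**2 / 2 * (-s1 - c1)
            + x**4 / 24 * (2 * s1 + 7 * c1)
            + x**6 / 720 * (23 * s1 - 76 * c1))

for x in (0.0, 0.1, 0.5, 1.0):
    print(f"x={x}: f={f(x):.10f}  T6={taylor6(x):.10f}  |diff|={abs(f(x) - taylor6(x)):.2e}")
```

Very close to $0$ the two agree to many digits; the agreement degrades as $x$ moves away from the expansion point.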
The question: Why do we transform functions into their Taylor expansions for estimations instead of just sampling the local area with lots of inputs?
There are many applications of Taylor series. The main reason they are used so often is that, generally speaking, polynomials are easier to deal with analytically than your average nonlinear function. You also get relatively good control over the truncation error, which is vital for many applications in calculus.
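For example, Taylor's theorem with the Lagrange form of the remainder makes that error control explicit. Expanding around $0$,

$$f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(0)}{k!} x^k + \frac{f^{(n+1)}(\xi)}{(n+1)!} x^{n+1} \quad \text{for some } \xi \text{ between } 0 \text{ and } x,$$

so a bound on the $(n+1)$-st derivative over the interval you care about gives an explicit bound on the truncation error. The $O(x^8)$ in the expansion above is a statement of exactly this kind (the $x^7$ term vanishes because the function is even).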
Whether or not calculating a Taylor series makes sense for your problem heavily depends on the task at hand. If you just want to plot the function, computing a high-order Taylor approximation will most likely not be the best option.
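As a rough sketch of that last point (again assuming NumPy, reusing the coefficients quoted in the question): the degree-6 polynomial matches the function extremely well near $0$ but drifts far away from it well before $\pm\pi$.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 2001)
f = np.sin(np.cos(x)) * np.cos(np.sin(x))        # the original function
s1, c1 = np.sin(1.0), np.cos(1.0)
p6 = (s1 + x**2 / 2 * (-s1 - c1)                 # degree-6 Taylor polynomial about 0
      + x**4 / 24 * (2 * s1 + 7 * c1)
      + x**6 / 720 * (23 * s1 - 76 * c1))

err = np.abs(f - p6)
print(err[np.abs(x) <= 0.5].max())   # tiny: excellent local approximation
print(err.max())                     # large near +-pi: it is only a local approximation
```

So for plotting over a whole interval, sampling the original function directly is usually simpler and more faithful; the Taylor polynomial pays off when you need something you can manipulate analytically near a point.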