What else do we gain from best linear approximations?


In single-variable calculus, we're introduced to the idea of best linear approximation, that is:

$$f(x) \approx f(a) + f'(a)(x - a)\tag{1}$$

And this is the best linear approximation for the point $a$. In multivariable calculus, the same concept is presented again, and for two variables, it is:

$$f\left(x,y\right)\approx f\left(a,b\right)+\frac{\partial f}{\partial x}\left(a,b\right)\left(x-a\right)+\frac{\partial f}{\partial y}\left(a,b\right)\left(y-b\right)$$
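To make the two-variable formula concrete, here is a small Python sketch (my own illustration, not from the original question) using the hypothetical function $f(x,y) = x^2 y$ at the point $(a,b) = (1,1)$, where the partial derivatives are easy to compute by hand:

```python
# Tangent-plane (best linear) approximation of f(x, y) = x^2 * y at (a, b) = (1, 1).
def f(x, y):
    return x**2 * y

def linear_approx(x, y, a=1.0, b=1.0):
    f_ab = f(a, b)
    fx = 2 * a * b   # df/dx = 2xy, evaluated at (a, b)
    fy = a**2        # df/dy = x^2, evaluated at (a, b)
    return f_ab + fx * (x - a) + fy * (y - b)

print(linear_approx(1.1, 1.05))  # approximation: 1.25
print(f(1.1, 1.05))              # exact value:   1.2705
```

Near $(1,1)$ the plane tracks the surface closely; the error grows as you move away from the base point.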

I've been thinking for a while but couldn't guess why this is important. We could say that $(1)$ gives us the line tangent to the graph of $f$ at the point $(a, f(a))$, which would reveal how fast the function is increasing there - but the same can be said using only the derivative.

For multivariable calculus, it's even worse. We're told that there is a best linear approximation, which is a plane tangent to a surface, but in this case we're not told what it is important for: the subject is presented and then we jump to other topics.

For a while, I thought that for some functions there are points at which it is easy to calculate the value and other points at which it's hard, e.g. $\sqrt{4}=2$ but $\sqrt{5}=2.23607\dots$. We could then anchor at such an easy point $a$, vary $x$ a little, and find an approximation, perhaps with an error function that makes the computation easier in some way. But then I thought that this kind of problem could easily be bypassed with modern computers. So, after all, why are best linear approximations useful? I'm mostly interested in applications within mathematics, but whatever comes to mind, just say it.
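The anchoring idea described above can be sketched in a few lines of Python (my illustration, using formula $(1)$ with $f(x)=\sqrt{x}$ anchored at the "easy" point $a=4$):

```python
import math

# Linear approximation of sqrt(x) anchored at a = 4, where sqrt(4) = 2 is known exactly.
def sqrt_approx(x, a=4.0):
    f_a = math.sqrt(a)                 # easy value at the anchor point
    fprime_a = 1 / (2 * math.sqrt(a))  # d/dx sqrt(x) = 1 / (2 sqrt(x))
    return f_a + fprime_a * (x - a)    # f(a) + f'(a)(x - a)

print(sqrt_approx(5))  # 2.25
print(math.sqrt(5))    # 2.2360679...
```

So $\sqrt{5} \approx 2 + \tfrac{1}{4}(5-4) = 2.25$, off by about $0.014$ - exactly the kind of hand computation that predates calculators, which is part of why the question of its modern usefulness arises.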