Why is $\lim_{x\to a}\frac{E(x)}{x-a} = 0$, instead of $\lim_{x\to a} E(x) = 0,$ used to explain why linear approximation works?


In my calculus textbook, the author made the following remark in the chapter about linear approximation:

Let $f$ be a differentiable function with $f'$ continuous. Define $E(x)$ to be the error in approximating $f(x)$ by the linearization $f(a) + f'(a)(x-a)$; that is:

$$E(x) = f(x) - [f(a) + f'(a)(x-a)]$$

We have that:

$$\lim_{x\to a}\frac{E(x)}{x-a} = \lim_{x\to a}\frac{f(x) - [f(a) + f'(a)(x-a)]}{x-a} = \lim_{x\to a}\frac{f(x) - f(a)}{x-a} - f'(a) = f'(a) - f'(a) = 0$$

This means that $E(x)$ approaches $0$ faster than $x-a$ does when $x \rightarrow a$. So as $x$ gets near $a$, the error in the linear approximation approaches $0$ faster than $x$ approaches $a$. This is the formal explanation of what we mean when we say that the linear approximation is "close" to $f(x)$ "near" $x=a$.
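The textbook's claim can be checked numerically; here is a minimal sketch, using $f(x) = e^x$ at $a = 0$ as an illustrative choice (not from the text):

```python
import math

# Numerically check that E(x)/(x - a) -> 0 for a sample function.
# f(x) = exp(x) and a = 0 are illustrative choices, not from the text.
f = math.exp
a = 0.0
fa, fpa = math.exp(a), math.exp(a)  # f(a) and f'(a); both equal 1 here

def E(x):
    """Error of the linear approximation f(a) + f'(a)(x - a)."""
    return f(x) - (fa + fpa * (x - a))

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    x = a + h
    print(f"x - a = {h:.0e},  E(x) = {E(x):.3e},  E(x)/(x - a) = {E(x) / h:.3e}")
```

Each time $x - a$ shrinks by a factor of 10, the ratio $E(x)/(x-a)$ also shrinks by roughly a factor of 10, so the error vanishes faster than $x - a$ does.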

In a physical sense, I take this to mean that $E(x)$ gets practically close to $0$ far before $x$ gets close to $a$. More broadly, the distance between $E(x)$ and $0$ is far smaller than the distance between $x$ and $a$ as $x$ gets near $a$.

What I struggle to understand is the last sentence in the author's remark, which speaks to the whole purpose of this limit. Clearly, $\lim_{x\to a} E(x) = 0$, which seems to suggest that the linear approximation of $f(x)$ practically becomes its true value as $x$ gets close to $a$; why is this fact alone not sufficient to establish that "the linear approximation is close to $f(x)$ near $x = a$," as the author concluded?

2 Answers

Accepted answer:

The idea is that $\lim_{x \to a}\frac{E(x)}{x - a} = 0$ says something stronger than just $\lim_{x \to a}E(x) = 0$. The latter tells us that the error $E(x)$ goes to 0 as $x$ gets close to $a$, but doesn't say how fast the limit converges.

As an analogy to understand what I mean by "how fast", the sequence $1, \frac{1}{2}, \frac{1}{3},\frac{1}{4}, \dots$ goes to 0, but $1, \frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \dots$ goes to 0 much faster (the corresponding terms are getting much closer to 0 much sooner in the latter sequence than in the former sequence). Formalizing this notion gives the idea of "order of convergence" (see https://en.wikipedia.org/wiki/Rate_of_convergence).
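The two sequences from the analogy can be compared directly; in this quick sketch, the ratio of corresponding terms itself tends to 0, which is one way to make "faster" precise:

```python
# Compare the two sequences from the text: 1/n and 1/2^(n-1).
# Both tend to 0, but their ratio (geometric over harmonic) also
# tends to 0, showing the geometric sequence shrinks faster.
for n in [1, 5, 10, 20]:
    h = 1 / n            # harmonic term
    g = 1 / 2 ** (n - 1)  # geometric term
    print(f"n = {n:2d}  1/n = {h:.2e}  1/2^(n-1) = {g:.2e}  ratio = {g / h:.2e}")
```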

On the other hand, the limit $\lim_{x \to a}\frac{E(x)}{x - a} = 0$ says that the numerator goes to 0 faster than the denominator. To see what can go wrong otherwise, take $a = 0$ and suppose the error function were $E(x) = \sqrt{x}$. Then $\lim_{x \to 0}\frac{\sqrt{x}}{x} = \lim_{x \to 0}\frac{1}{\sqrt{x}}$, which does not exist, as it diverges to infinity. Both the numerator $\sqrt x$ and the denominator $x$ approach 0 as $x \to 0$, but, as you can check numerically (try the sequence $\{\frac{1}{n^2}\}_{n = 1}^\infty$), the denominator $x$ approaches 0 much faster than the numerator $\sqrt{x}$. So as $x$ gets closer to 0, the denominator becomes comparatively much smaller than the numerator, and the fraction $\frac{E(x)}{x}$ actually gets bigger, not smaller!

As your textbook points out, the point of the derivative and of linear approximation is that the error $E(x)$ always approaches $0$ much faster than $x - a$ does as $x \to a$, so nothing like the example in the previous paragraph can happen: you always get $\lim_{x \to a}\frac{E(x)}{x - a} = 0$.

Another answer:

If you define $$ E_t(x) = f(x) -[f(a) + t(x-a)] $$ then $E_t(x)$ will have limit $0$ at $a$ for every value of $t$ (assuming $f$ is continuous). All those lines defined by the expression in square brackets are in some sense linear approximations - all that means is that they pass through the point $(a,f(a))$.

Setting $t=f'(a)$ gives you the best linear approximation - the only one for which the error still tends to $0$ after you divide by $x-a$, i.e. for which $\lim_{x\to a}\frac{E_t(x)}{x-a} = 0$.
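This can be seen numerically; in the sketch below, $f = \sin$ and $a = 0$ are illustrative choices (so $f'(a) = \cos 0 = 1$), and only the slope $t = 1$ makes the ratio $E_t(x)/(x-a)$ shrink toward 0:

```python
import math

# Compare E_t(x) = f(x) - [f(a) + t(x - a)] for several slopes t.
# f = sin and a = 0 are illustrative choices; f'(a) = cos(0) = 1.
f, a, fa = math.sin, 0.0, 0.0

def ratio(t, h):
    """E_t(x) / (x - a) evaluated at x = a + h."""
    x = a + h
    return (f(x) - (fa + t * (x - a))) / (x - a)

for t in [0.5, 1.0, 1.5]:  # only t = 1.0 equals f'(a)
    print(t, [round(ratio(t, h), 4) for h in (1e-1, 1e-2, 1e-3)])
```

For $t = 0.5$ and $t = 1.5$ the ratio stalls near $\pm 0.5$ (the gap between $t$ and $f'(a)$), while for $t = 1$ it collapses toward 0.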