Differentiation confusion


I've been reading my textbook, which shows how to differentiate from first principles. It goes something like this:

$$\lim_{h \to 0} \frac{f(x + h) - f(x)}{(x + h) - x} = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}$$

The thing that confuses me is this: as $h$ tends to zero, shouldn't the whole thing become undefined, since the denominator becomes so small as to be negligible? I suspect this comes from some misunderstanding of limits on my part; I'm on very shaky footing with this whole topic, and I'd really appreciate it if someone could help elucidate things a little. Thank you.


5 Answers

On BEST ANSWER

It may help to think in terms of sequences. As $h$ approaches zero, you get a sequence of difference quotients. The derivative is the value to which this sequence converges.

You are right that the fraction is undefined when $h = 0$. However, that is not what the limit asks. We generate a sequence of difference quotients for $h \neq 0$ (note that this is an infinite sequence, since $h$ runs through infinitely many values as it approaches $0$), and then we ask to which value this sequence converges.

The value of the difference quotient at $h = 0$ is irrelevant, because we cannot measure the instantaneous rate of change directly. We can only look at the average rate of change over an interval around the point of interest and compute the limit (the value to which the sequence of difference quotients converges) of this average rate of change.

As an example, consider the function $x^2$. Its difference quotient is $2x + h$. When we say that its derivative is $2x$, we do not mean that the value of the difference quotient at $h = 0$ is $2x$ (in fact, that would be a meaningless statement, because the difference quotient is defined over an interval, not at a point); we mean that the difference quotient converges to $2x$ as $h$ approaches $0$.
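This convergence can be watched numerically. The sketch below is my own illustration (the function `difference_quotient` is a hypothetical helper, not from the answer): it evaluates the difference quotient of $f(x) = x^2$ at $x = 3$ for shrinking $h$ and shows the values approaching $2x = 6$.

```python
# Sketch: difference quotients of f(x) = x^2 at x = 3 for shrinking h.
# The printed values approach the derivative 2x = 6 as h -> 0.
def difference_quotient(f, x, h):
    """Average rate of change of f over the interval [x, x + h]."""
    return (f(x + h) - f(x)) / h

f = lambda t: t * t
for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, difference_quotient(f, 3.0, h))
```

Note that no step of this ever sets $h = 0$; we only observe where the sequence of quotients is heading.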

On

As $h$ tends to $0$, the whole thing does not (usually) become undefined. Consider, for example, the case $f(x) = x$. You will see that the fraction equals $1$ for every nonzero $h$, so it is perfectly well defined.
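A quick numerical check of this (my own sketch, with a hypothetical helper name, not part of the answer):

```python
# Sketch: for f(x) = x, the difference quotient ((x + h) - x) / h is 1
# for every nonzero h, so nothing blows up as h shrinks.
def quotient(x, h):
    f = lambda t: t  # the identity function f(x) = x
    return (f(x + h) - f(x)) / h

for h in [1.0, 0.5, 0.001, -0.001]:
    print(h, quotient(2.0, h))  # each value is 1 up to rounding
```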

On

Note that the limit is used because naively setting $h=0$ does not work. Try a simple example, such as $f(x)=x^2$. Then $f(x+h)-f(x)=(x+h)^2-x^2=2xh+h^2$. This can nicely be divided by $h\ne0$, resulting in $2x+h$; and now taking the limit as $h\to 0$ is really as simple as plugging in $h=0$.

$$\lim_{h\to0}\frac{f(x+h)-f(x)}{h} = \lim_{h\to0}\frac{(x+h)^2-x^2}{h} = \lim_{h\to0}\frac{2xh+h^2}{h} = \lim_{h\to0}(2x+h) = 2x.$$
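To see that the algebra, not luck, is doing the work, here is a small exact-arithmetic sketch of my own using Python's `fractions` module: for $f(x) = x^2$ the quotient equals $2x + h$ exactly for every nonzero $h$, with no rounding involved.

```python
from fractions import Fraction

# Exact rational arithmetic: for f(x) = x^2, the difference quotient
# ((x + h)^2 - x^2) / h simplifies to exactly 2x + h for every h != 0.
def quotient(x, h):
    return ((x + h) ** 2 - x ** 2) / h

x = Fraction(3)
for h in (Fraction(1, 10), Fraction(1, 1000), Fraction(-1, 1000)):
    assert quotient(x, h) == 2 * x + h  # holds exactly, no rounding
```

Since $2x + h$ is a polynomial in $h$, taking the limit really is as simple as plugging in $h = 0$.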

On

The whole idea of the derivative is figuring out the rate of change (slope) of a function at an instant. If you recall the conventional definition of slope as rise over run, the idea here is that, starting from $x$, we take smaller and smaller intervals $[x, x+h]$ and calculate the slope as $h$ gets close to $0$. When $h$ is actually $0$, there is no interval, so of course the slope is undefined; but as $h$ approaches $0$, these slopes approximate the instantaneous rate of change arbitrarily well.
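As a concrete sketch of this rise-over-run picture (my own illustration, not from the answer, using $\sin$ whose derivative is $\cos$): the slopes over shrinking intervals $[x, x+h]$ settle toward the instantaneous rate $\cos(x)$.

```python
import math

# Rise-over-run slopes of sin over [x, x + h] approach cos(x) as h -> 0.
def slope(f, x, h):
    return (f(x + h) - f(x)) / h  # rise over run on [x, x + h]

x = 0.5
for h in (0.1, 0.01, 0.001):
    print(h, slope(math.sin, x, h), "target:", math.cos(x))
```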


On

That fraction is only undefined when $h = 0$, which is very different from saying that $h$ tends to $0$. For $h \neq 0$, there is never any reason to treat the denominator as negligible to the point of calling it $0$. As motivation, consider that as positive $x$ decreases toward $0$, $\frac{1}{x}$ increases: the denominator is certainly not becoming negligible.

Recall the definition of a limit: $\lim_{h \to 0}$ does not ask what happens when $h = 0$; it asks what value the expression approaches as $h$ gets arbitrarily close to $0$.