The correct understanding of limits


I want to be sure that I'm understanding the concept of limits correctly.

When I first encountered the concept of limits, I was told that the definition of

$$\lim_{x\to a} f(x) = L$$

is that as the values of $x$ get closer to the value $a$, the values of $f(x)$ get closer to the value $L$.

But this definition seems vague and wrong to me. For instance, it fails for the constant function $f(x) = c$: as $x$ gets closer to $a$, the values of $f(x)$ stay the same and don't get closer to any value.

Then I read Tom Apostol's Calculus, and the definition the book provides is: the statement $\lim_{x\to a} f(x) = L$ means that we can make the values of $f(x)$ as close as we please to the number $L$, provided that we make the values of $x$ sufficiently close to $a$.

This definition contains no ambiguity and makes sense for every function.

My question: Is the first definition I wrote really wrong? I ask because everyone who introduces limits seems to begin with it. And am I correctly understanding what Tom Apostol means?

Note: What I wrote about Apostol's definition is how I understand it. I know that the rigorous definition is the $\epsilon$-$\delta$ one, but that definition seems to be just a rigorous translation of what I said about Apostol's definition.

Correct me if I've written anything wrong.


This touches a bit on the pedagogy of math: the first definition isn't so much wrong as it is propaedeutic. It is meant to give you a first feeling for the subject, rather than hitting you with something that covers all the edge cases. But yes, the case of a constant function you mentioned isn't covered by the first definition which, if formalized, would probably look like this:

$\lim_{x\to a}f(x)=L$ if the following holds: when $x$ is sufficiently close to $a$, then $f(x)$ can be made as close to $L$ as we please, and if $x$ is closer to $a$ than $y$ is, then $f(x)$ is closer to $L$ than $f(y)$ is.

The latter sentence reflects the "getting closer" aspect of the first definition, and it is clearly wrong: not just in your edge case of a constant function, but also for a sine function, because such a function is not monotonic on either side of the limit point, as this definition would require.
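To make that failure concrete, here is a small check with an example of my own choosing (in the spirit of the answer's sine remark, not taken from it): $f(x) = x\sin(1/x)$ tends to $0$ as $x \to 0$, yet a point closer to $0$ can produce a value farther from the limit.

```python
import math

# Illustrative counterexample (my own choice): f(x) = x*sin(1/x)
# tends to 0 as x -> 0, yet "x closer to 0 implies f(x) closer
# to 0" is false for it.

def f(x):
    return x * math.sin(1 / x)

x1 = 1 / math.pi        # sin(pi) = 0, so f(x1) is (numerically) ~0
x2 = 2 / (5 * math.pi)  # sin(5*pi/2) = 1, so f(x2) = x2, roughly 0.127

assert abs(x2) < abs(x1)        # x2 is closer to the limit point 0...
assert abs(f(x2)) > abs(f(x1))  # ...but f(x2) is farther from the limit 0
print("closer x does not imply closer f(x)")
```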

Apostol's wordy definition is better, but still slightly ambiguous, imho. What does it mean that we can make $f(x)$ close to $L$? For instance, consider the Dirichlet function $D$, which is $1$ for rational and $0$ for irrational arguments. Is $\lim_{x\to0}D(x)=1$? Taken very literally, according to Apostol's wordy definition it is, because every small neighborhood of $0$ contains points with $D(x)=1$, so we can make $D(x)$ as close to $1$ as we want by choosing a "good" $x$ sufficiently close to $0$. But the formal definition disagrees: the Dirichlet function has no limit at any point, because it oscillates far too wildly.
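A quick numerical sketch of that wild oscillation (my own illustration; since every floating-point number is technically rational, rationality is tagged symbolically here rather than tested): the sequences $x_n = 1/n$ and $y_n = \sqrt{2}/n$ both approach $0$, yet $D$ is constantly $1$ on the first and constantly $0$ on the second, so no single limit value can work.

```python
import math

# Illustrative sketch: the Dirichlet function D is 1 on rationals
# and 0 on irrationals. We tag rationality symbolically, because
# every float is rational and a runtime test would be meaningless.

def dirichlet(is_rational):
    return 1 if is_rational else 0

# Two sequences approaching 0: x_n = 1/n (rational), y_n = sqrt(2)/n (irrational).
rational_seq = [(1 / n, True) for n in range(1, 6)]
irrational_seq = [(math.sqrt(2) / n, False) for n in range(1, 6)]

values_on_rationals = [dirichlet(q) for _, q in rational_seq]
values_on_irrationals = [dirichlet(q) for _, q in irrational_seq]

print(values_on_rationals)    # prints [1, 1, 1, 1, 1]
print(values_on_irrationals)  # prints [0, 0, 0, 0, 0]
```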

So here's an in-between step towards the formal definition:

$\lim_{x\to a}f(x)=L$ if the following holds: If we pick a positive distance as small as we want, and if we choose $x$ sufficiently close to (but not coinciding with) $a$, then $f(x)$ will be closer to $L$ than the previously chosen distance.

This way, it's clearer that if $x$ is sufficiently close to $a$, then $f(x)$ must be close enough to $L$, instead of there only existing an $x$ in that "sufficiently close region" for which $f(x)$ is close enough to $L$.

The formal definition just makes a logical formula out of this: distances are the absolute values of the differences, the chosen distance is $\varepsilon$, "sufficiently close" means that there exists a $\delta$ such that ... And so on.
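Written out as that logical formula (with the standard convention, mentioned above, that $x$ does not coincide with $a$), the in-between definition becomes:

$$\lim_{x\to a} f(x) = L \iff \forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x: \quad 0 < |x-a| < \delta \implies |f(x)-L| < \varepsilon.$$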


Is that when the values of $x$ get closer to the value $a$, the values of $f(x)$ get closer to the value $L$? But I see this definition is a vague and wrong one.

Well, that's because it's not a definition but rather a paraphrase, and it is a vague one. What it is not supposed to mean is that when $x$-values get closer to $a$, then $y$-values get closer to $L$, i.e. $$ |x_2-a| < |x_1-a| \qquad \not\!\!\!\!\implies \quad |f(x_2)-L| < |f(x_1)-L| $$

What one does instead is to limit the possible deviation from $L$: one wants $|f(x)-L|<\varepsilon$ for all $x$ that are close enough to (but distinct from) $a$. The point is that $\varepsilon$ may be arbitrarily small, i.e. for every value of $\varepsilon>0$ it must be possible to find some $\delta > 0$ such that for all $x$ close enough to $a$, the value of $|f(x)-L|$ stays below $\varepsilon$: $$ 0 < |x-a| < \delta \quad\implies\quad |f(x)-L| < \varepsilon $$

This means that

the values of $f(x)$ get closer to the value $L$

in the sense that the maximal allowed deviation from $L$ can be made arbitrarily small. This includes the case where $f(x)=L$ for all $x$, because $|f(x)-L|=0<\varepsilon$ is satisfied trivially.
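As a concrete illustration of this mechanism, using an example of my own (not from the answer): for $f(x) = x^2$, $a = 2$, $L = 4$, the choice $\delta = \min(1, \varepsilon/5)$ keeps $|f(x)-L|$ below any prescribed $\varepsilon$. The check below samples points rather than proving anything, but it shows the $\varepsilon \mapsto \delta$ dependence at work.

```python
# Sample-based sanity check (not a proof) for f(x) = x^2, a = 2, L = 4.
# Since |x^2 - 4| = |x-2|*|x+2| < 5|x-2| whenever |x-2| < 1,
# the choice delta = min(1, eps/5) should work.

def f(x):
    return x * x

a, L = 2.0, 4.0

for eps in (1.0, 0.1, 0.001):
    delta = min(1.0, eps / 5)
    # Sample x with 0 < |x - a| < delta, on both sides of a.
    samples = [a + delta * k / 1000 for k in range(-999, 1000) if k != 0]
    assert all(abs(f(x) - L) < eps for x in samples)

print("delta = min(1, eps/5) kept |f(x) - L| < eps at all sampled points")
```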

Taken very literally, according to Apostol's wordy definition

So the bottom line is: beware of wordy definitions! In the math book you mention, the paraphrased definition is presumably not a proper definition, i.e. it does not appear in a passage labeled Definition. One could define limits using words only, but that would get too lengthy and convoluted.