Epsilon delta is super redundant, isn't it?

I have a very different question.

I was looking into the $\epsilon$-$\delta$ definition of a limit. I actually understand the idea, but what I'm wondering is why it was necessary to come up with such an idea in the first place.

Looking at functions such as $x+1$, it's super clear that when $x$ approaches $2$, the value of $f(x)$ approaches $3$. We could just leave it at that, because to me it already feels like a proof that we can plug in $2$ and get the value. It definitely means that if I plugged in $2.0001$, I would get a slightly bigger value, but still close to $3$.

I'm just looking at the proof of $\epsilon$-$\delta$ and still can't make myself happy about why the definition is even necessary. In one of the books, I read:

The intuitive definition of a limit given in Section 2.2 is inadequate for some purposes because such phrases as "$x$ is close to $2$" and "$f(x)$ gets closer and closer to $L$" are vague.

Well, to me, $\epsilon$-$\delta$ ends up being just as vague. Maybe I don't see how it's a proof, and that's why I'm confused. So whatever range I give you, $|f(x) - L| < \epsilon$, you should be able to give me a range near $a$, such that $|x-a| < \delta$. So what if I can give you that? It seems like such a redundant proof to me.
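
To be concrete, the definition I'm looking at is the standard one:

$$\lim_{x\to a} f(x) = L \quad\iff\quad \text{for every } \epsilon > 0 \text{ there exists } \delta > 0 \text{ such that } 0 < |x-a| < \delta \implies |f(x)-L| < \epsilon.$$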

Would appreciate your thoughts!

3 Answers

Best answer:

I'm going to answer your question

why it was necessary to come up with such an idea in the first place.

through a pseudo-historical lens (i.e., some of what I say may not be 100% historically accurate, so take this as my own vague retelling of history).

You're making your argument with pretty simple examples. For these, you should think of the $\epsilon$-$\delta$ definition as a "test case": does it give you the answer you expect or not (more precisely, does the answer you expect satisfy the $\epsilon$-$\delta$ condition)? Note that for centuries, people were perfectly happy to get by without $\epsilon$-$\delta$, and everyone roughly knew what they were talking about.

But as time passed, the definition of a function itself evolved from something along the lines of "a formula" (essentially a quotient of two analytic functions, and perhaps not even that general) into something extremely general. With this broader definition of function, people started to ask whether it is possible to extend the notion of limit to such functions as well. In some cases the answers were still easy to work out; in other cases, not so much.

Furthermore, around the 1700s–1800s, people were very interested in infinite series and infinite products, trying to push the boundaries of what had already been established. They then realized that with the crazier things they were considering (including things like Fourier series, swapping series with derivatives and integrals, etc.), they couldn't really get correct answers, or even if they did, someone else would get different answers, and so on. All of this boiled down to not having a precise enough definition of limits, and by extension, precise theorems guaranteeing the validity of the manipulations they were doing. So one day they were like "enough is enough," and people (e.g. Cauchy and Weierstrass) formulated the precise definition. By the way, just because this super-precise definition was available didn't mean people leapt for joy; there was (very understandably) considerable resistance to accepting it due to its terse nature. But soon everyone started to realize its necessity for dealing with the problems they had in mind.

What you're studying now is the product of centuries' worth of distillation of a core idea, and you're studying it "top-down", which is why you probably don't appreciate its importance. But, rest assured, this definition gave people the confidence that what they were doing was actually right (because without a precise definition, you cannot have a precise theorem).


Now, I should mention that $\epsilon$-$\delta$ does not help us "calculate" limits (though it can easily tell us when something is not the limit, in the sense that it is often easy to verify that a given number $\lambda$ does not satisfy the $\epsilon$-$\delta$ definition for $\lim\limits_{x\to a}f(x)$). In fact, in much of modern math, we rarely use the definition to compute things! We use the (very precise, and sometimes extremely difficult) definitions to prove a whole variety of theorems. In the case of limits, these include the sum, product, and quotient rules (modulo division by zero), and most importantly composition, along with things like L'Hopital's rule, the FTC, etc. It is only with these theorems that we compute anything in practice.
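
To illustrate that last point with a small example of my own, using the question's $f(x) = x+1$: to verify that $\lambda = 4$ does not satisfy the definition for $\lim\limits_{x\to 2}(x+1)$, take $\epsilon = \tfrac{1}{2}$. For any $\delta > 0$, the point $x = 2 + \tfrac{1}{2}\min(\delta, 1)$ satisfies $0 < |x-2| < \delta$, yet

$$|f(x) - 4| = 1 - \tfrac{1}{2}\min(\delta, 1) \ge \tfrac{1}{2} = \epsilon,$$

so no $\delta$ works, and $4$ is not the limit.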

Answer:

I completely sympathise with the feeling that using $\epsilon$-$\delta$ to prove that $$\lim_{x\to2}x+1=3$$is overkill somehow. However, there are good reasons to do this. Here are two:

  • Many times $\epsilon$-$\delta$ is not overkill, and by then you need to know how to use it. Using it in examples that are so simple you can see the moving parts is how you learn. Go down the kid's slope a few times to learn turning and braking before you try the world cup downhill piste, if you will.
  • "It already feels like a proof that we can plug in $2$ and get the value": yes, but ultimately the true justification for this is still $\epsilon$-$\delta$.
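
For concreteness, the full $\epsilon$-$\delta$ argument for the limit above is only one line (this is the standard textbook proof): given any $\epsilon > 0$, choose $\delta = \epsilon$. Then

$$0 < |x - 2| < \delta \implies |(x+1) - 3| = |x - 2| < \delta = \epsilon,$$

which is exactly what the definition demands.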

Any calculus student with a keen mind will intuitively understand what limits are supposed to be. And they will be able to apply this intuitive understanding to most introductory examples. However, when things become more complicated, in order to actually make sure that everyone agrees on exactly what a limit is, we need to nail down a rigorous definition. It is also necessary to have a definition to work with if you're ever going to do actual calculations, rather than just looking at a problem and intuiting a likely solution.

The hope is, of course, that the rigorous definition yields results that agree with most people's intuitions, and that it is relatively easy to use in calculations. And the $\epsilon$-$\delta$ definition does this. Which is why it has stood the test of time and is still used centuries after its conception.

The idea of this exercise is to help you understand the inner workings of $\epsilon$-$\delta$, and to convince you that it does yield the results it should. This exercise is not made to demonstrate the power of $\epsilon$-$\delta$. That comes later.

Answer:

I've also found an application for $\epsilon$-$\delta$ when dealing with math brain teasers (where it's probably overkill). Take the following question, which is a glorified riddle:

Consider the sum:

$$\frac{1}{2} \pm \frac{1}{4} \pm \frac{1}{8} \pm \frac{1}{16} \pm \cdots$$

Depending on the choice of $+$'s and $-$'s, can this sum be made to converge to any real number between $0$ and $1$?

The answer is yes, and it can loosely be found by epsilon-delta-ing your way there (it's like starting at $1/2$ and taking steps left and right of it on the number line).
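
Here is a small sketch of that greedy "step left or right" idea (my own illustration in code; the function name and the cutoff of 50 terms are arbitrary choices):

```python
# Greedily choose the sign of each term 1/4, 1/8, ... so the partial
# sums walk toward a chosen target in [0, 1].  The leading 1/2 is fixed.
def signed_partial_sum(target, n_terms=50):
    """Approximate `target` by 1/2 +/- 1/4 +/- 1/8 +/- ... (n_terms terms)."""
    s = 0.5  # the first term is always +1/2
    for k in range(2, n_terms + 1):
        term = 0.5 ** k
        # step toward the target: add when below it, subtract when above
        s += term if s < target else -term
    return s

print(signed_partial_sum(0.3))  # very close to 0.3
print(signed_partial_sum(0.9))  # very close to 0.9
```

After processing the term $1/2^k$, the running sum is within $1/2^k$ of the target, so the error shrinks geometrically — which is exactly the $\epsilon$-$\delta$-style bound hiding in the riddle.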

(And this may answer your original question best:) $\epsilon$-$\delta$ is a helpful way to convert the intuition of limits into rigorous proof, which has been the path of the mathematician for centuries.