I have a very different question.
I was looking into the $\epsilon$-$\delta$ definition of a limit. I actually understand the idea, but what I'm wondering is why it was necessary to come up with such an idea in the first place.
Looking at functions such as $f(x) = x+1$, it's super clear that as $x$ approaches 2, the value of $f(x)$ approaches 3. We could just leave it at that, because to me it already feels like a proof that we can plug in 2 and get the value. It definitely means that if I plugged in 2.0001, I would get a slightly bigger value, but still close to 3.
I'm looking at $\epsilon$-$\delta$ proofs and still can't convince myself that the definition is even necessary. In one of the books, I read:
The intuitive definition of a limit given in Section 2.2 is inadequate for some purposes because such phrases as "$x$ is close to 2" and "$f(x)$ gets closer and closer to $L$" are vague.
Well, to me, $\epsilon$-$\delta$ ends up being just as vague. Maybe I don't see how it's a proof, and that's why I'm confused. So whatever range I give you, $|f(x) - L| < \epsilon$, you should be able to give me a range near $a$, namely $0 < |x-a| < \delta$. So what if I can give you that? It seems such a redundant proof to me.
Would appreciate your thoughts !
I'm going to answer your question through a pseudo-historical lens (i.e., some of the stuff I say may not be 100% historically accurate, so take this as my own vague retelling of history).
You're making your argument with pretty simple examples. For these, you should think of the $\epsilon$-$\delta$ definition as a "test case": does it give you the answer you expect or not (more precisely, does the answer you expect satisfy the $\epsilon$-$\delta$ condition)? Note that for centuries, people were perfectly happy to get by without $\epsilon$-$\delta$, and everyone knew roughly what they were talking about.
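To see the definition pass this test on your own example $f(x) = x+1$ near $a = 2$, here is a minimal worked sketch (my own, not from any particular book):

```latex
Given $\epsilon > 0$, choose $\delta = \epsilon$. Then whenever $0 < |x - 2| < \delta$,
\[
  |f(x) - 3| = |(x + 1) - 3| = |x - 2| < \delta = \epsilon,
\]
so $\lim_{x \to 2} (x + 1) = 3$ by the definition.
```

For this function the check is almost trivial (the "range near $a$" you hand back is just $\delta = \epsilon$), which is exactly why it feels redundant here; the definition earns its keep on functions where no such obvious choice exists.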
But as time passed, the definition of a function itself evolved from something along the lines of "a formula" (essentially a quotient of two analytic expressions, and perhaps not even that general) into something extremely general. With this broader definition of a function, people started to ask whether it is possible to extend the notion of limit to such functions as well. In some cases, the answers were still easy to work out; in other cases, not so much.

Furthermore, around the 1700s-1800s, people were very interested in infinite series and infinite products, trying to push the boundaries of what had already been established. They then realized that with the crazier things they were considering (including things like Fourier series, swapping series with derivatives and integrals, etc.), they couldn't really get correct answers, or even if they did, someone else would get other answers, and so on. All of this boiled down to not having a precise enough definition of limits, and by extension, precise theorems guaranteeing the validity of the manipulations they were doing. So one day they were like "enough is enough", and people (e.g. Cauchy and Weierstrass) formulated the precise definition.

By the way, just because this super-precise definition was available didn't mean people leapt for joy; there was (very understandably) considerable resistance to accepting it due to its terse nature. But soon everyone started to realize its necessity for dealing with the problems they had in mind.
What you're studying now is the product of centuries' worth of distillation of a core idea, and you're studying it "top-down", which is why you probably don't appreciate its importance. But, rest assured, this definition gave people the confidence that what they were doing was actually right (because without a precise definition, you cannot have a precise theorem).
Now, I should mention that $\epsilon$-$\delta$ does not help us "calculate" limits (though it can easily tell us when something is *not* the limit, in the sense that it is often easy to verify that a given number $\lambda$ does not satisfy the $\epsilon$-$\delta$ definition of $\lim\limits_{x\to a}f(x)$). In fact, in much of modern math, we rarely use the definition to compute things! We use the (very precise, and sometimes extremely difficult) definitions in order to prove a whole variety of theorems. In the case of limits, these include the sum, product, and quotient (modulo division by zero) rules, and most importantly composition, plus other things like L'Hôpital's rule, the FTC, etc. It is only with these theorems that we compute anything in practice.
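To illustrate the division of labor (my own example): once the sum, product, and quotient rules have been proved from the definition, a routine computation never touches $\epsilon$ or $\delta$ at all:

```latex
\[
  \lim_{x \to 2} \frac{x^2 + 1}{x + 3}
  = \frac{\lim_{x \to 2} x^2 + \lim_{x \to 2} 1}{\lim_{x \to 2} x + \lim_{x \to 2} 3}
  = \frac{4 + 1}{2 + 3}
  = 1,
\]
\text{using the quotient rule (the denominator's limit, } 5\text{, is nonzero), then the sum and product rules.}
```

Each rule was proved once, with $\epsilon$-$\delta$ doing the heavy lifting; afterwards, every computation like this one just reuses the theorems.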