My casual study of mathematics and calculus introduced me to the notion that calculus didn't find a firm foundation until Cauchy, Weierstrass, et al. developed the rigorous theory of limits, well over a century after Newton and Leibniz.
The concept of limits (and their epsilon-delta proofs) was what allowed calculus to get past the shaky logic of infinitesimals.
(The above is just me laying out what I believe to be true, but please feel free to take issue with any/all of it.)
HERE IS MY ACTUAL QUESTION (more or less).
How exactly do limits save calculus?
When thinking about the derivative, we want to contemplate the rate of change of a function wrt its input. Instead of using an infinitesimal change in the input, we take the limit as the change in the input goes to zero.
When we say "take the limit," we mean: for any band around the limiting value (+/- "epsilon"), we can find a band around the input value (+/- "delta") such that any input within delta of the point (other than the point itself) is guaranteed to produce a function value within our desired band (i.e., within epsilon). This is all rather loose, but hopefully you catch the drift.
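If I try to state it less loosely, my (possibly imperfect) understanding of the formal version is: the derivative $f'(a)$ is the number $L$ such that for every $\varepsilon > 0$ there exists a $\delta > 0$ with $$0 < |h| < \delta \implies \left| \frac{f(a+h) - f(a)}{h} - L \right| < \varepsilon.$$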
BUT HOW DOES THIS HELP SAVE CALCULUS?
Is it because by saying we can take any ARBITRARY epsilon, we are, in effect, saying we can take EVERY epsilon > 0? We exhaust every epsilon all the way down to (but not including) 0. This reminds me a lot of the idea that 0.999... = 1.
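(The way I understand the $0.999\ldots$ case: for any $\varepsilon > 0$, enough nines get you within $\varepsilon$ of 1, $$\left| 1 - \underbrace{0.99\ldots9}_{n \text{ nines}} \right| = 10^{-n} < \varepsilon \quad \text{for all sufficiently large } n,$$ which is exactly what the statement $\lim_{n \to \infty} (1 - 10^{-n}) = 1$, i.e. $0.999\ldots = 1$, means. I'm wondering whether the same "exhaust every epsilon" idea is doing the work in both places.)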
Sorry if this isn't clear, but if I had to put this in simplest terms, I guess it boils down to: is "arbitrarily many" the same as "infinitely many"?
Firstly, calculus was originally formulated with "heuristic" infinitesimals; limits were created precisely to remedy the lack of rigor in those original infinitesimal arguments. Secondly, calculus has since been reformulated with rigorous infinitesimals (nonstandard analysis), so the viewpoint that modern infinitesimal calculus is not rigorous is incorrect.
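To give a rough sense of what "rigorous infinitesimals" buys you (a sketch, not the full development): in Robinson-style nonstandard analysis the derivative can be written directly with a nonzero infinitesimal $\varepsilon$ and the standard-part map $\operatorname{st}$, $$f'(x) = \operatorname{st}\!\left( \frac{f(x+\varepsilon) - f(x)}{\varepsilon} \right),$$ provided the same standard part comes out for every nonzero infinitesimal $\varepsilon$.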
I should at least mention that limits have some odd intuitive gaps in their formulation. For instance, taking the limit of a function as its argument approaches infinity is a perfectly rigorous operation; yet the usual rule is that, for a limit to exist, the left-hand and right-hand limits must agree: $$\lim_{x \to \infty} f(x)=\lim_{x \to \infty^+} f(x)=\lim_{x \to \infty^-} f(x)$$ The thing I find odd to this day is that, for limits at infinity, the notation of approaching infinity from different directions is considered acceptable. To me this is intuitive nonsense, seeing as infinity is (a) not a number in the first place, and (b) impossible to approach from a greater real number, since it is larger than all the reals, i.e. the numbers we use in real analysis.
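(To be fair to the formalism, the rigorous definition of a limit at infinity has no direction built into it at all, as I understand it: $\lim_{x \to \infty} f(x) = L$ just means that for every $\varepsilon > 0$ there is an $M$ such that $$x > M \implies |f(x) - L| < \varepsilon,$$ so the $\infty^+$ and $\infty^-$ decorations are, strictly speaking, doing no work; that is exactly what I find odd about the notation being tolerated.)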
In the end, limits are easier to formulate than infinitesimals. If you don't like the epsilon-delta definitions, you should see the look of horror on some people's faces when you merely say the word "ultrafilter"; it's enough to say that's how most infinitesimals and transfinite numbers (different-sized "infinities") are constructed rigorously. Infinitesimals could be used without invoking horrific derivations and definitions, much like how we teach children addition and multiplication without justifying them with rings. Sadly, most people have a poor understanding of infinitesimals (I'm no expert either), so they end up not understanding the technicalities of claims like $1 \neq 0.999\ldots$. It should be noted that most nonstandard systems (that's what they are called) do accept the equality, while others have rigorous arguments saying the two differ by an infinitesimal. It's a semantic black hole when you really get into it...
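For the curious, here is a very loose sketch of the ultrafilter construction I'm alluding to (glossing over many details): take a free ultrafilter $\mathcal{U}$ on $\mathbb{N}$ and build $$^*\mathbb{R} = \mathbb{R}^{\mathbb{N}} / \mathcal{U},$$ identifying two sequences of reals when the set of indices on which they agree belongs to $\mathcal{U}$. The class of $(1, \tfrac{1}{2}, \tfrac{1}{3}, \ldots)$ is then a positive infinitesimal: for any real $r > 0$, the set of indices with $\tfrac{1}{n} < r$ is cofinite and hence in $\mathcal{U}$, so the class is smaller than every positive real.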
The Point
Learn limits and the epsilon-delta way, even though I find the system somewhat flawed. If this kind of thing really rubs you the wrong way, you can investigate other ways of formulating calculus, and by that point you'll hopefully be educated enough to have developed an intuition for what you want the system to look like.