Recently I started my calculus classes, and so far what I seem to have understood is that calculus basically finds solutions by approximation at a microscopic level.
So if I have to find the area covered by a circle, I would need to fit in many (~infinitely many) triangles (or rectangles) such that their edges appear to match the circle. Then all I need to do is sum the areas of all these infinitely many triangles, since we already know how to compute the area of a triangle.
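To make this concrete, here is a small numerical sketch (my own illustration, not part of the original post): a regular n-gon inscribed in a unit circle is n thin triangles, and its total area approaches pi as n grows. The exact area is the limit, not any single finite approximation.

```python
import math

def inscribed_polygon_area(n, r=1.0):
    """Area of a regular n-gon inscribed in a circle of radius r.

    The polygon is n isosceles triangles meeting at the centre, each
    with apex angle 2*pi/n and area (1/2) * r**2 * sin(2*pi/n).
    """
    return 0.5 * n * r**2 * math.sin(2 * math.pi / n)

for n in (6, 100, 10_000, 1_000_000):
    print(n, inscribed_polygon_area(n))
# The values approach pi = 3.14159..., but for any finite n the area is
# strictly less than pi; the limit itself is the exact answer.
```

Every finite polygon undershoots, yet the limit of the sequence is exactly pi, which is the point the question circles around.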
In the above example, we have only approximated: the triangles' edges appear to superimpose on the circle's curve. If we zoom in further, we could most probably fit in more such triangles.
And this is where my confusion lies: at any given stage, the answer won't be perfect. It would just be a very, very close approximation to the actual area of the circle. So doesn't this mean that calculus is an imprecise method?
And by the way, the above example is based on the method of exhaustion: https://en.wikipedia.org/wiki/Method_of_exhaustion
Let me give another example. Consider a square with sides of 1 unit. The shortest path from one corner to the opposite corner is the diagonal, of length sqrt(2) by the Pythagorean theorem. The longest way would be to go along the sides, which covers 2 units. But if we were to choose any other path except the diagonal, one that doesn't cross itself and never goes back, then the distance covered would also be 2 units. Yes, any path, inside the square would also cover 2 units. Now, suppose you travel in a zig-zag manner, staying very close to the diagonal, as shown in this picture.
Now make those zig-zag steps smaller and smaller, so small that the path finally appears to be a straight line, i.e. the diagonal. Remember, it is still zig-zag at the microscopic level. And here we meet a paradox: Pythagoras says the length is sqrt(2), yet choosing any path within the square that doesn't cross itself and doesn't go back should give a distance of 2 units.
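The staircase construction can be checked numerically. The following sketch (my own, with the step count n as a free parameter) shows that the n-step staircase gets arbitrarily close to the diagonal while its length never changes; the usual resolution is that arc length is not continuous under this kind of pointwise limit, which is why calculus defines curve length via inscribed chords rather than staircases.

```python
import math

def staircase_length(n):
    """Total length of the n-step staircase from (0, 0) to (1, 1):
    2*n axis-aligned segments, each of length 1/n."""
    return 2 * n * (1 / n)

def max_deviation_from_diagonal(n):
    """Farthest the staircase strays from the line y = x.

    The outer corners sit at points like (k/n + 1/n, k/n); the
    point-to-line distance is |x - y| / sqrt(2) = 1 / (n * sqrt(2)).
    """
    return 1 / (n * math.sqrt(2))

for n in (1, 10, 1000, 10**6):
    print(n, staircase_length(n), max_deviation_from_diagonal(n))
# The deviation shrinks toward 0, but the length stays at 2, never
# approaching sqrt(2): the limit of the lengths is not the length of
# the limit curve.
```

So there is no contradiction with Pythagoras: the staircases converge to the diagonal as point sets, but their lengths form the constant sequence 2, 2, 2, ..., whose limit has nothing to do with the diagonal's length.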
This is something I find very hard to digest. Does this somehow hint that calculus isn't the best tool out there, or that calculus has a flaw of its own?
Thank you for putting in your efforts to answer this question.

The original poster says: "Yes, any path, inside the square would also cover 2 units." How do they justify this claim?
Edit: Here's one possible answer to the original poster's problem.
Oh, I see! I believe the problem you're pointing at is as follows. Assume we live on a discrete finite lattice with a minimal element of length 1. Then there is no square root of 2, i.e. there is no diagonal of an elementary square, and your conclusion would follow. However, if you admit the existence of the diagonal, and therefore the existence of the square root of 2, the problem disappears.
Let's construct such a set of real numbers.
Consider the unit interval $\left[0,1\right]$ on the real number line. Let there be $c$ elements of $R$ on $\left[0,1\right]$. The number $c$ is infinitely large, of course; we just assume that one is allowed to do arithmetic with it. One such $c$ would be, say, Cantor's continuum $c$, the cardinal number of $R$. So we conclude that the idea of the existence of $c$ is fairly familiar and plausible.
Let us now ask the following question: "What is the smallest distance between two elements of $R$?" This question may seem strange at first, but the answer is rather straightforward. Namely, one simply divides the length of the unit interval, $1$, by the number of elements, $c$, to calculate the smallest distance $d$. The result is $d=1/c=0$. The distance is $0$ because $c$ is infinitely large. In other words, the set $R$ is dense: there is no smallest distance.
Now stretch the unit interval by a factor of $c$! The interval is of length $c$ now. If one assumes that the number of elements hasn't changed, then it still contains $c$ elements. Hence, the smallest distance is now $d=c/c=1$: there is a smallest distance. Notice that we have merely stretched the unit interval by a factor of $c$ while keeping its elements intact. It is the same unit interval from $R$ we are used to, but magnified, as if seen under a looking glass of magnification $c$.
If done like this, then the new set we created by stretching $R$ is no longer dense. One way to look at this phenomenon is to conclude that $R$ is dense relative to one measure, but not relative to another measure.
Let's denote the stretched set by $R_l$, with $l$ standing for "larger". $R_l$ exhibits some rather interesting properties. For instance, lengths of curves in $R_l^2$ depend on orientation, simply because $R_l$ is discrete. Another interesting property is that $R_l$ is now well ordered. Yet another is that $R_l$ has a smallest element, and this smallest element can be interpreted as an infinitesimal in $R$. Yet another is that one may accommodate another unit interval from $R$, with its $c$ elements, onto the unit interval of $R_l$, which has only two elements, $0$ and $1$, thus creating a dense set once again. Let's call this new dense set $R_{lD}$, with $D$ standing for "dense". A function defined on $R$ is not defined at all points of $R_{lD}$. In other words, functions continuous on $R$ are not necessarily continuous on $R_{lD}$. The converse is also true: functions discontinuous on $R$ may be continuous on $R_{lD}$, depending on how one extends them from $R$ onto $R_{lD}$.
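The claim that lengths in $R_l^2$ depend on orientation can be illustrated by analogy with the ordinary integer lattice (my analogy, not the answerer's construction): when movement is restricted to unit steps along the axes, the shortest path length is the taxicab distance, which agrees with the Euclidean length along an axis but exceeds it by a factor of sqrt(2) along the diagonal, exactly the original poster's 2 versus sqrt(2).

```python
import math

def lattice_path_length(a, b):
    """Shortest path from (0, 0) to (a, b) when only unit steps along
    the lattice axes are allowed (taxicab metric): |a| + |b|."""
    return abs(a) + abs(b)

def euclidean_length(a, b):
    """Straight-line (Euclidean) distance from (0, 0) to (a, b)."""
    return math.hypot(a, b)

# Along an axis the two notions of length agree:
print(lattice_path_length(5, 0), euclidean_length(5, 0))
# Along the diagonal the lattice length is longer by a factor sqrt(2):
print(lattice_path_length(5, 5), euclidean_length(5, 5))
```

On such a lattice the "diagonal of length sqrt(2)" simply does not exist as a path, which is the hidden assumption this answer attributes to the original poster's paradox.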
So, you see, the original poster's question may rest on some interesting hidden assumptions with interesting consequences. I hope this clarifies my point of view a bit.