The general two-sided limit is this:
We have some function $f$ defined on an open interval containing $c$ (I take "open interval" to mean the interval has more than one point and we're not at an endpoint of $f$'s domain), except possibly at $c$ itself — so a limit can exist at a point even if $f$ is not defined there. Let $L$ be a real number. Then the statement:
$$\lim_{x \rightarrow c} f(x) = L$$
means that for each $\epsilon > 0$ there exists a $\delta > 0$ such that, for all $x$, if $0 < |x - c| < \delta$ then $|f(x) - L| < \epsilon$.
I understand this is trying to say that as we get close to $c$ on the $x$-axis we get close to $L$ on the $y$-axis, but I don't understand how any of it is working or what it's proving or why we're defining it this way.
For instance, why is it $0 < |x - c| < \delta$ and not just $|x - c| < \delta$, the way the epsilon piece is just $|f(x) - L| < \epsilon$? (We don't write $0 < |f(x) - L| < \epsilon$, for some reason.)
When we say "for all $x$", does this mean it must be true for every single $x$ with $-\infty < x < \infty$? Or for all $x$ in $f$'s domain except the endpoints? For all $x$ where, exactly?
What exactly is this saying? We pick some $\epsilon$ specifying how close we want to be to $L$ on the $y$-axis, and claim we can find a $\delta$ such that every $x$-value inside the resulting threshold around $c$ lands entirely inside that epsilon threshold on the $y$-axis (i.e. even closer?).
The "absolute value stuff" serves to talk about distances. You want to say "$f(x)$ is within a tolerance $\epsilon$ from $L$ whenever $x$ is sufficiently near $c$". Now writing $|f(x)-L|<\epsilon$ exactly expresses this colloquial "$f(x)$ is within a tolerance $\epsilon$ from $L$", and similarly $|x-c|<\delta$ expresses that the distance between $x$ and $c$ is less than $\delta$.
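To see the two distance conditions working together, here is a standard worked example (the particular function is my choice, not from the question): take $f(x)=3x-1$, $c=2$, $L=5$. Given any $\epsilon>0$, choose $\delta=\epsilon/3$. Then whenever $0<|x-2|<\delta$ we have
$$|f(x)-5|=|3x-6|=3|x-2|<3\delta=\epsilon,$$
so every $\epsilon$-challenge can be met, and $\lim_{x\to 2}f(x)=5$ by the definition.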
The limit definition does not prove anything. It is just the mathematically precise formulation of our intuitive idea of what it means that "in the limit" $f(x)=L$ when $x=c$. In reality this $x$ is never $=c$, maybe because $c=\infty$, and quite possibly $f(x)$ is never $=L$, because for all admissible $x$ the value $f(x)$ is in fact $<L$.
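A concrete instance of this (my example, for illustration): take $f(x)=1-\frac{1}{x}$ and let $x\to\infty$. Then $f(x)<1$ for every $x>0$, and $x$ never actually reaches "$\infty$", yet
$$\lim_{x\to\infty}f(x)=1,$$
since for any $\epsilon>0$ we have $|f(x)-1|=\frac{1}{x}<\epsilon$ as soon as $x>1/\epsilon$.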
Of course this $\epsilon\rightsquigarrow\delta$ business is difficult to grasp. Things would be simpler if we just had $|f(x)-L|\leq C|x-c|$ for a suitable constant $C$. But experience has shown us that in all too many interesting limit situations we do not have such a simple "Lipschitz" estimate for the error, in particular when it comes to limits $\lim_{x\to\infty}$.
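For example (my choice of function): $f(x)=\sqrt{x}$ at $c=0$ admits no such Lipschitz estimate, since
$$\frac{|\sqrt{x}-0|}{|x-0|}=\frac{1}{\sqrt{x}}\to\infty \quad\text{as } x\to 0^+,$$
so no constant $C$ works. The $\epsilon$-$\delta$ definition still applies, though: given $\epsilon>0$, take $\delta=\epsilon^2$; then $0<x<\delta$ gives $|\sqrt{x}-0|=\sqrt{x}<\sqrt{\delta}=\epsilon$.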
There remains the question of why $x=c$ is excluded in the limit definition. This is a convention adopted by some $19^{\rm th}$-century mathematicians, and it is still in force in World$\setminus$France. Look up "limite" in the French edition of Wikipedia, and you will find a definition that differs from Baby Rudin et al. in this respect.
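To see why the exclusion of $x=c$ matters, consider (my example) the function defined by $f(x)=x$ for $x\neq 0$ and $f(0)=1$. With the standard condition $0<|x-c|<\delta$ the value $f(0)$ is never tested, and $\lim_{x\to 0}f(x)=0$. Under a definition requiring only $|x-c|<\delta$, the point $x=0$ itself would have to satisfy $|f(0)-L|=|1-L|<\epsilon$ for every $\epsilon>0$, forcing $L=1$, while the nearby points force $L=0$; no single $L$ works, and the limit would fail to exist.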