This is an attempt to rigorously define, from scratch, the following intuitive definition of limits:
$\displaystyle{\lim_{x \to c}f(x)}=L$ means that $f(x)$ can be made arbitrarily close to $L$ by making $x$ be sufficiently close to $c$.
Given the above definition of limits, we can say that $\displaystyle{\lim_{x \to c}f(x)}=L$ means that for every $x \ne c$ in the domain of $f$ with $f(x)\ne L$, there exists some $x_0$ in the domain of $f$ that is closer to $c$ than $x$ is, and for which $f(x_0)$ is closer to $L$ than $f(x)$ is.
Making $f(x)$ be arbitrarily close to $L$:
If, for some $x$ you give me, I can give you some $x_0$ such that $f(x_0)$ is closer to $L$ than $f(x)$ is, then we will say that $f(x)$ can be made closer to $L$ at that particular $x$ value. If I can do this for every $x\ne c$ in the domain of $f$ such that $f(x)\ne L$, then we will say that $f(x)$ can be made arbitrarily close to $L$. The relationship between $x$ and its corresponding $x_0$ can be stated as follows: $$|f(x)-L|>|f(x_0)-L|;\ f(x)\ne L$$
Making $x$ be sufficiently close to $c$:
Now that we have made the first condition true, we also need $x_0$ to be closer to $c$ than $x$ is; an $x_0$ that is farther away from $c$ than $x$ is does not help, even if $f(x_0)$ is closer to $L$. If, for some $x$, we cannot find any $x_0$ which:
- Gets $f(x)$ closer to $L$ and
- Is itself closer to $c$ than $x$ is.
Then we would say that $\displaystyle{\lim_{x \to c}f(x)}$ does not exist. The relationship between $x$ and $x_0$ can be stated as follows: $$|x-c|>|x_0-c|;x\ne c$$
Reason for specifying that $x\ne c$ and $f(x)\ne L$:
If $x$ is exactly at $c$ then it is impossible to make $x$ be any closer to $c$ than it already is. The distance between $x$ and $c$ is $0$ and it can't be made any smaller. Similarly, $f(x)$ can't get any closer to $L$ if $f(x)=L$.
Putting it all together:
$\displaystyle{\lim_{x \to c}f(x)}=L$ means that for every $x\ne c$ in the domain of $f$ such that $f(x)\ne L$, there exists an $x_0$ in the domain of $f$ such that:
$$|x-c|>|x_0-c|$$ and $$|f(x)-L|>|f(x_0)-L|$$
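In quantifier form (a possible formalization of the above, assuming for simplicity that the domain of $f$ is all of $\mathbb{R}$), the proposed definition reads:

$$\forall x\ \bigl(x\ne c \ \wedge\ f(x)\ne L\bigr)\ \exists x_0:\quad |x_0-c|<|x-c| \ \text{ and } \ |f(x_0)-L|<|f(x)-L|.$$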
Is my reasoning correct? If not, where did I go wrong?
The intuitive definition that you quoted at the beginning is correct, but it glosses over some very important concerns.
It is not enough that there exists some $x_0$ closer to $c$ such that $f(x_0)$ is closer to $L.$ You need there to be an $x_0$ such that every $x_1$ at least as close to $c$ as $x_0$ has $f(x_1)$ within some given small distance of $L$. Moreover, you need to be able to repeat this feat no matter how small that given distance becomes.
Your interpretation misses the "every", "arbitrary", and "repeat" parts.
For example, consider the function $$ f(x) = \begin{cases}\dfrac{\sin(1/x)}{x} & x \neq 0,\\0 & x = 0. \end{cases}$$ This function crosses the $x$ axis infinitely many times between $-\frac1\pi$ and $\frac1\pi.$ By your definition, $L = \lim_{x\to 0} f(x) = 0,$ because someone can pick any $x$ they like, where $f(x) \neq 0,$ but then there will always be infinitely many $x$-axis crossings between $x$ and $0,$ and you can always find an $x_0$ near one of those $x$-axis crossings such that $f(x_0) = \frac12 f(x),$ which is closer to $L.$
But actually $f(x)$ has no limit at all as $x\to 0.$ It oscillates infinitely many times, with ever larger amplitude, as you approach $x=0.$
What your definition misses in this case is that one $x_0$ is not enough. Sure, we can always get $f(x_0)$ closer to zero by choosing $x_0$ carefully, but there are many other $x$-values even closer to $0$ that give much worse values when you apply $f$ to them.
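A quick numerical sketch (in Python; the function name `f` here is just the example above) makes the failure visible: sampling $f(x)=\sin(1/x)/x$ at the points $x_n = 1/\bigl((n+\tfrac12)\pi\bigr),$ where $\sin(1/x_n)=\pm1,$ gives $|f(x_n)|=(n+\tfrac12)\pi,$ which grows without bound as $x_n\to0$:

```python
import math

def f(x):
    # f(x) = sin(1/x)/x for x != 0, and f(0) = 0
    return math.sin(1.0 / x) / x if x != 0 else 0.0

# Sample at x_n = 1/((n + 1/2) * pi), where sin(1/x_n) = +/-1,
# so |f(x_n)| = (n + 1/2) * pi: the peaks grow as x_n shrinks.
for n in range(1, 6):
    x_n = 1.0 / ((n + 0.5) * math.pi)
    print(f"x = {x_n:.5f}, f(x) = {f(x_n):+.3f}")
```

So even though a carefully chosen $x_0$ near a zero crossing lands close to $0,$ the peaks between that $x_0$ and $0$ are worse than anything seen so far.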
Now consider a second function, $$ g(x) = \begin{cases}x-1 & x < 0,\\ x+1 & x \geq 0. \end{cases}$$
Now again your definition says $L = \lim_{x\to 0} g(x) = 0,$ because given any $x,$ we can simply set $x_0 = \frac12 x,$ and it is guaranteed that $g(x_0)$ will be closer to $L$ than $g(x)$ is.
What your definition misses in this case is that although we can always make $g(x_0)$ be closer, we cannot make it be arbitrarily close to $L = 0.$ The closest we can get is $1$ unit above or below $0.$
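The gap is easy to see numerically (a small Python sketch of the $x_0 = \frac12 x$ move described above): as $x$ is halved repeatedly, $g(x)$ does keep getting closer to $0,$ yet $|g(x)|$ never drops below $1$:

```python
def g(x):
    # g(x) = x - 1 for x < 0, and x + 1 for x >= 0
    return x - 1 if x < 0 else x + 1

x = 0.8
for _ in range(10):
    x /= 2  # the "x_0 = x/2" move from the argument above
    print(f"x = {x:.6f}, g(x) = {g(x):.6f}")
# |g(x)| decreases toward 1 but never gets within 1 unit of 0.
```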
I like to think of limits using an adversarial game as an analogy.
The rules of the game are that I choose a number, $x_1,$ and say that the limit of $f(x)$ as $x$ goes to $x_1$ is some number, $L.$ Then my opponent gets to challenge me by specifying a distance from $L.$ I have to respond with a distance from $x_1.$ My opponent then gets to choose any number $x_2$ within my chosen distance from $x_1.$ And that means they get to choose any such number, not necessarily one that I would like them to pick. If $f(x_2)$ is closer to $L$ than the distance my opponent named, I win. Otherwise I lose.
The limit of $f(x)$ as $x\to x_1$ is $L$ if my position in this game is a winning position: no matter how small a distance my opponent challenges me with, I can respond with a distance around $x_1$ such that every $x_2$ they might pick within it has $f(x_2)$ within their chosen distance of $L.$
What are the conditions under which I win? I need to have a function $f$ and a number $x_1$ such that by judiciously choosing neighborhoods around $x_1$ (a neighborhood being all numbers within some chosen distance of $x_1$), I can shrink the range of $f(x)$ on that entire neighborhood down to an arbitrarily small interval around $L.$
The usual epsilon-delta definition of limits is essentially this game translated into mathematical logic, where $\epsilon$ is the distance my opponent chooses and $\delta$ is the distance I choose.
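Spelled out (for $f$ defined on all of $\mathbb{R}$), that translation is the familiar statement:

$$\lim_{x\to x_1} f(x) = L \iff \forall \epsilon>0\ \exists \delta>0\ \forall x:\quad 0<|x-x_1|<\delta \implies |f(x)-L|<\epsilon.$$

The opponent's challenge is $\epsilon,$ my response is $\delta,$ and their free pick of $x_2$ is the universally quantified $x.$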
Note that I have only dealt with functions on all real numbers here. There are some additional points that come up when the domain is only a proper subset of the real numbers, as implied by a comment below.
I also want to reiterate the point someone made in a comment under the question, that you do more mathematics by trying to reinterpret the definitions and getting it wrong than you do if you don't even try. So you have a good question. Don't stop trying.