Newton and Leibniz invented calculus in roughly the 1680s. Cauchy and Weierstrass gave the $\epsilon$-$\delta$ definition of a limit in roughly the 1820s.
So how did they define derivatives?
I know that they looked at $\frac{f(a+d) - f(a)}{d}$ for $d$ "a very small number", i.e., a number such that $d^2 = 0$ but $d\neq 0$: an "infinitesimal".
They defined $f'(a)= \frac{f(a+d) - f(a)}{d}$, and under this definition, the derivative of $f(x)=3x$ is $\frac{3(a+d)-3a}{d}= \frac{3a+3d-3a}{d}= \frac{3d}{d}= 3$.
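This "drop the $d^2$ terms" recipe is essentially what dual numbers make precise, and it can be sketched mechanically. Here is a minimal illustration in Python (the names `Dual` and `deriv` are my own, not from any library): a pair $a + b\,d$ where multiplication enforces $d^2 = 0$, so reading off the coefficient of $d$ recovers the derivative, just as in the $3x$ calculation above.

```python
class Dual:
    """A pair a + b*d, where d is treated as satisfying d^2 = 0."""

    def __init__(self, real, eps=0.0):
        self.real = real  # the real part a
        self.eps = eps    # the coefficient b of the infinitesimal d

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.real + other.real, self.eps + other.eps)

    __radd__ = __add__

    def __mul__(self, other):
        # (a + b*d)(c + e*d) = ac + (ae + bc)d, since the d^2 term vanishes
        other = self._wrap(other)
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

    __rmul__ = __mul__

def deriv(f, a):
    """Evaluate f at a + 1*d and read off the coefficient of d."""
    return f(Dual(a, 1.0)).eps

print(deriv(lambda x: 3 * x, 5.0))  # 3.0, matching the hand computation
print(deriv(lambda x: x * x, 2.0))  # 4.0
```

This is the idea behind forward-mode automatic differentiation; it works because polynomial arithmetic never needs to divide by $d$, sidestepping the problem raised next.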
Suppose $d$ is a nonzero real number with $d^2=0$. Being nonzero, $d$ has a multiplicative inverse, so $d = d^{-1}(d^2) = d^{-1}\cdot 0 = 0$, a contradiction. So the only real number with this property is zero, and this approach is incompatible with the real numbers as we usually define them (the axioms of the real numbers).
How does this approach work with a different idea of "numbers"?
This has been formalized in "nonstandard analysis", which uses "hyperreals" instead of just real numbers. The hyperreals include infinitesimals, which are positive yet smaller than every positive real number, as well as infinite numbers, which are bigger than every real number, etc. There's a lot to be said about the construction of the hyperreals, and why a mathematician would want to use this nonstandard framework, and Terence Tao has already said a lot of it well (not recommended for someone without at least a good grounding in undergraduate real analysis). But how derivatives work once you have the hyperreals isn't so complex.
Every function you're likely to come across in calculus that's defined on an interval automatically has a "hyper" version defined on the hyper version of the interval (throw in infinitesimal shifts, and infinities if the interval is unbounded). Basically, you can input hyperreals into the functions you're used to dealing with without a problem. Also, every hyperreal that's finite (less than some integer in absolute value, let's say) has a unique nearest real number, called its standard part.
Now, we can take $\frac{f(a+h)-f(a)}h$ for some infinitesimal $h$ (either positive or negative), and we'll get some hyperreal number, which will be closest to some real number (depending on $h$), which I'll call $f'_h(a)$. If $f'_h(a)$ is the same real number for every infinitesimal $h$, then we call that real number $f'(a)$.
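To see this definition in action, here is a worked example (my own, following the definition above) with $f(x)=x^2$. For any nonzero infinitesimal $h$,
$$\frac{f(a+h)-f(a)}{h}=\frac{a^2+2ah+h^2-a^2}{h}=2a+h.$$
Since $h$ is infinitesimal, $2a+h$ is nearest to the real number $2a$, and this is the same real number no matter which infinitesimal $h$ we chose, so $f'(a)=2a$. Note that no term was "discarded": $h$ disappears only when we pass to the nearest real number.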
For an example, consider $f(x)=|x|$. This is $x$ for real $x\ge0$ and $-x$ otherwise. Its hyperreal version is $x$ for hyperreal $x\ge0$ and $-x$ otherwise. Then $f'_h(0)=h/h=1$ if $h$ is a positive infinitesimal, and $-1$ if it's a negative infinitesimal, so $f'(0)$ doesn't exist.
As Ian Mateus already pointed out, *Elementary Calculus: An Infinitesimal Approach* is a free calculus textbook based on this sort of thing.